00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1041 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3703 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.044 The recommended git tool is: git 00:00:00.045 using credential 00000000-0000-0000-0000-000000000002 00:00:00.048 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.063 Fetching changes from the remote Git repository 00:00:00.067 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.084 Using shallow fetch with depth 1 00:00:00.084 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.084 > git --version # timeout=10 00:00:00.100 > git --version # 'git version 2.39.2' 00:00:00.100 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.125 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.125 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.914 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.928 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.944 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.944 > git config core.sparsecheckout # timeout=10 00:00:02.955 > git read-tree -mu HEAD # timeout=10 00:00:02.975 > git checkout -f 
db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.000 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.000 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.088 [Pipeline] Start of Pipeline 00:00:03.103 [Pipeline] library 00:00:03.105 Loading library shm_lib@master 00:00:03.105 Library shm_lib@master is cached. Copying from home. 00:00:03.122 [Pipeline] node 00:00:03.133 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:03.134 [Pipeline] { 00:00:03.144 [Pipeline] catchError 00:00:03.145 [Pipeline] { 00:00:03.153 [Pipeline] wrap 00:00:03.159 [Pipeline] { 00:00:03.163 [Pipeline] stage 00:00:03.165 [Pipeline] { (Prologue) 00:00:03.176 [Pipeline] echo 00:00:03.177 Node: VM-host-WFP7 00:00:03.181 [Pipeline] cleanWs 00:00:03.191 [WS-CLEANUP] Deleting project workspace... 00:00:03.191 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.198 [WS-CLEANUP] done 00:00:03.349 [Pipeline] setCustomBuildProperty 00:00:03.419 [Pipeline] httpRequest 00:00:03.877 [Pipeline] echo 00:00:03.879 Sorcerer 10.211.164.101 is alive 00:00:03.888 [Pipeline] retry 00:00:03.889 [Pipeline] { 00:00:03.902 [Pipeline] httpRequest 00:00:03.907 HttpMethod: GET 00:00:03.907 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.908 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.908 Response Code: HTTP/1.1 200 OK 00:00:03.909 Success: Status code 200 is in the accepted range: 200,404 00:00:03.909 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.054 [Pipeline] } 00:00:04.070 [Pipeline] // retry 00:00:04.076 [Pipeline] sh 00:00:04.356 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.367 [Pipeline] httpRequest 00:00:04.797 [Pipeline] echo 00:00:04.798 Sorcerer 10.211.164.101 
is alive 00:00:04.806 [Pipeline] retry 00:00:04.807 [Pipeline] { 00:00:04.816 [Pipeline] httpRequest 00:00:04.821 HttpMethod: GET 00:00:04.821 URL: http://10.211.164.101/packages/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz 00:00:04.822 Sending request to url: http://10.211.164.101/packages/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz 00:00:04.822 Response Code: HTTP/1.1 200 OK 00:00:04.823 Success: Status code 200 is in the accepted range: 200,404 00:00:04.823 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz 00:00:26.092 [Pipeline] } 00:00:26.115 [Pipeline] // retry 00:00:26.123 [Pipeline] sh 00:00:26.408 + tar --no-same-owner -xf spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz 00:00:28.965 [Pipeline] sh 00:00:29.251 + git -C spdk log --oneline -n5 00:00:29.251 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:00:29.251 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:00:29.251 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:00:29.251 e2dfdf06c accel/mlx5: Register post_poller handler 00:00:29.251 3c8001115 accel/mlx5: More precise condition to update DB 00:00:29.271 [Pipeline] withCredentials 00:00:29.282 > git --version # timeout=10 00:00:29.295 > git --version # 'git version 2.39.2' 00:00:29.314 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:29.316 [Pipeline] { 00:00:29.326 [Pipeline] retry 00:00:29.329 [Pipeline] { 00:00:29.344 [Pipeline] sh 00:00:29.629 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:29.904 [Pipeline] } 00:00:29.921 [Pipeline] // retry 00:00:29.926 [Pipeline] } 00:00:29.942 [Pipeline] // withCredentials 00:00:29.949 [Pipeline] httpRequest 00:00:30.321 [Pipeline] echo 00:00:30.322 Sorcerer 10.211.164.101 is alive 00:00:30.332 [Pipeline] retry 00:00:30.334 [Pipeline] { 00:00:30.344 [Pipeline] httpRequest 00:00:30.348 HttpMethod: GET 00:00:30.349 URL: 
http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:30.349 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:30.368 Response Code: HTTP/1.1 200 OK 00:00:30.369 Success: Status code 200 is in the accepted range: 200,404 00:00:30.369 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:03:34.022 [Pipeline] } 00:03:34.039 [Pipeline] // retry 00:03:34.047 [Pipeline] sh 00:03:34.333 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:03:35.725 [Pipeline] sh 00:03:36.009 + git -C dpdk log --oneline -n5 00:03:36.009 eeb0605f11 version: 23.11.0 00:03:36.009 238778122a doc: update release notes for 23.11 00:03:36.009 46aa6b3cfc doc: fix description of RSS features 00:03:36.009 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:36.009 7e421ae345 devtools: support skipping forbid rule check 00:03:36.029 [Pipeline] writeFile 00:03:36.044 [Pipeline] sh 00:03:36.330 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:36.343 [Pipeline] sh 00:03:36.628 + cat autorun-spdk.conf 00:03:36.628 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:36.628 SPDK_RUN_ASAN=1 00:03:36.628 SPDK_RUN_UBSAN=1 00:03:36.628 SPDK_TEST_RAID=1 00:03:36.628 SPDK_TEST_NATIVE_DPDK=v23.11 00:03:36.628 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:36.628 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:36.635 RUN_NIGHTLY=1 00:03:36.638 [Pipeline] } 00:03:36.653 [Pipeline] // stage 00:03:36.671 [Pipeline] stage 00:03:36.675 [Pipeline] { (Run VM) 00:03:36.690 [Pipeline] sh 00:03:36.975 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:36.975 + echo 'Start stage prepare_nvme.sh' 00:03:36.975 Start stage prepare_nvme.sh 00:03:36.975 + [[ -n 7 ]] 00:03:36.975 + disk_prefix=ex7 00:03:36.975 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:03:36.975 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:03:36.975 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:03:36.975 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:36.975 ++ SPDK_RUN_ASAN=1 00:03:36.975 ++ SPDK_RUN_UBSAN=1 00:03:36.975 ++ SPDK_TEST_RAID=1 00:03:36.975 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:36.975 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:36.975 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:36.975 ++ RUN_NIGHTLY=1 00:03:36.975 + cd /var/jenkins/workspace/raid-vg-autotest 00:03:36.975 + nvme_files=() 00:03:36.975 + declare -A nvme_files 00:03:36.975 + backend_dir=/var/lib/libvirt/images/backends 00:03:36.975 + nvme_files['nvme.img']=5G 00:03:36.975 + nvme_files['nvme-cmb.img']=5G 00:03:36.975 + nvme_files['nvme-multi0.img']=4G 00:03:36.975 + nvme_files['nvme-multi1.img']=4G 00:03:36.975 + nvme_files['nvme-multi2.img']=4G 00:03:36.975 + nvme_files['nvme-openstack.img']=8G 00:03:36.975 + nvme_files['nvme-zns.img']=5G 00:03:36.975 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:36.975 + (( SPDK_TEST_FTL == 1 )) 00:03:36.975 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:36.975 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:03:36.975 + for nvme in "${!nvme_files[@]}" 00:03:36.975 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:03:36.975 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:36.975 + for nvme in "${!nvme_files[@]}" 00:03:36.975 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:03:36.975 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:36.976 + for nvme in "${!nvme_files[@]}" 00:03:36.976 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:03:36.976 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:36.976 + for nvme in "${!nvme_files[@]}" 00:03:36.976 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:03:36.976 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:36.976 + for nvme in "${!nvme_files[@]}" 00:03:36.976 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:03:36.976 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:36.976 + for nvme in "${!nvme_files[@]}" 00:03:36.976 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:03:36.976 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:36.976 + for nvme in "${!nvme_files[@]}" 00:03:36.976 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:03:37.236 
Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:37.236 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:03:37.236 + echo 'End stage prepare_nvme.sh' 00:03:37.236 End stage prepare_nvme.sh 00:03:37.250 [Pipeline] sh 00:03:37.536 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:37.536 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:03:37.536 00:03:37.536 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:03:37.536 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:03:37.536 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:03:37.536 HELP=0 00:03:37.536 DRY_RUN=0 00:03:37.536 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:03:37.536 NVME_DISKS_TYPE=nvme,nvme, 00:03:37.536 NVME_AUTO_CREATE=0 00:03:37.536 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:03:37.536 NVME_CMB=,, 00:03:37.536 NVME_PMR=,, 00:03:37.536 NVME_ZNS=,, 00:03:37.536 NVME_MS=,, 00:03:37.536 NVME_FDP=,, 00:03:37.536 SPDK_VAGRANT_DISTRO=fedora39 00:03:37.536 SPDK_VAGRANT_VMCPU=10 00:03:37.536 SPDK_VAGRANT_VMRAM=12288 00:03:37.536 SPDK_VAGRANT_PROVIDER=libvirt 00:03:37.536 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:37.536 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:37.536 SPDK_OPENSTACK_NETWORK=0 00:03:37.536 VAGRANT_PACKAGE_BOX=0 00:03:37.536 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:37.536 
FORCE_DISTRO=true 00:03:37.536 VAGRANT_BOX_VERSION= 00:03:37.536 EXTRA_VAGRANTFILES= 00:03:37.536 NIC_MODEL=virtio 00:03:37.536 00:03:37.536 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:03:37.536 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:03:39.447 Bringing machine 'default' up with 'libvirt' provider... 00:03:40.017 ==> default: Creating image (snapshot of base box volume). 00:03:40.017 ==> default: Creating domain with the following settings... 00:03:40.017 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733501961_60407138e18994ef309f 00:03:40.017 ==> default: -- Domain type: kvm 00:03:40.017 ==> default: -- Cpus: 10 00:03:40.017 ==> default: -- Feature: acpi 00:03:40.017 ==> default: -- Feature: apic 00:03:40.017 ==> default: -- Feature: pae 00:03:40.017 ==> default: -- Memory: 12288M 00:03:40.017 ==> default: -- Memory Backing: hugepages: 00:03:40.017 ==> default: -- Management MAC: 00:03:40.017 ==> default: -- Loader: 00:03:40.017 ==> default: -- Nvram: 00:03:40.017 ==> default: -- Base box: spdk/fedora39 00:03:40.017 ==> default: -- Storage pool: default 00:03:40.017 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733501961_60407138e18994ef309f.img (20G) 00:03:40.017 ==> default: -- Volume Cache: default 00:03:40.017 ==> default: -- Kernel: 00:03:40.017 ==> default: -- Initrd: 00:03:40.017 ==> default: -- Graphics Type: vnc 00:03:40.017 ==> default: -- Graphics Port: -1 00:03:40.017 ==> default: -- Graphics IP: 127.0.0.1 00:03:40.017 ==> default: -- Graphics Password: Not defined 00:03:40.017 ==> default: -- Video Type: cirrus 00:03:40.017 ==> default: -- Video VRAM: 9216 00:03:40.017 ==> default: -- Sound Type: 00:03:40.017 ==> default: -- Keymap: en-us 00:03:40.017 ==> default: -- TPM Path: 00:03:40.017 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:40.017 ==> default: -- Command line args: 00:03:40.017 
==> default: -> value=-device, 00:03:40.017 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:40.017 ==> default: -> value=-drive, 00:03:40.017 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:03:40.017 ==> default: -> value=-device, 00:03:40.017 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:40.017 ==> default: -> value=-device, 00:03:40.017 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:40.017 ==> default: -> value=-drive, 00:03:40.017 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:03:40.017 ==> default: -> value=-device, 00:03:40.017 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:40.017 ==> default: -> value=-drive, 00:03:40.017 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:03:40.017 ==> default: -> value=-device, 00:03:40.017 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:40.017 ==> default: -> value=-drive, 00:03:40.017 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:03:40.017 ==> default: -> value=-device, 00:03:40.017 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:40.277 ==> default: Creating shared folders metadata... 00:03:40.277 ==> default: Starting domain. 00:03:41.725 ==> default: Waiting for domain to get an IP address... 00:03:56.638 ==> default: Waiting for SSH to become available... 00:03:58.019 ==> default: Configuring and enabling network interfaces... 
00:04:04.606 default: SSH address: 192.168.121.86:22 00:04:04.606 default: SSH username: vagrant 00:04:04.606 default: SSH auth method: private key 00:04:07.145 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:15.428 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:04:22.001 ==> default: Mounting SSHFS shared folder... 00:04:23.379 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:04:23.379 ==> default: Checking Mount.. 00:04:25.288 ==> default: Folder Successfully Mounted! 00:04:25.288 ==> default: Running provisioner: file... 00:04:25.857 default: ~/.gitconfig => .gitconfig 00:04:26.425 00:04:26.425 SUCCESS! 00:04:26.425 00:04:26.425 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:04:26.425 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:26.425 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:04:26.425 00:04:26.435 [Pipeline] } 00:04:26.451 [Pipeline] // stage 00:04:26.462 [Pipeline] dir 00:04:26.462 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:04:26.464 [Pipeline] { 00:04:26.477 [Pipeline] catchError 00:04:26.479 [Pipeline] { 00:04:26.492 [Pipeline] sh 00:04:26.774 + + vagrant ssh-config --hostsed vagrant -ne 00:04:26.774 /^Host/,$p 00:04:26.774 + tee ssh_conf 00:04:30.066 Host vagrant 00:04:30.066 HostName 192.168.121.86 00:04:30.066 User vagrant 00:04:30.066 Port 22 00:04:30.066 UserKnownHostsFile /dev/null 00:04:30.066 StrictHostKeyChecking no 00:04:30.066 PasswordAuthentication no 00:04:30.066 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:04:30.066 IdentitiesOnly yes 00:04:30.066 LogLevel FATAL 00:04:30.066 ForwardAgent yes 00:04:30.066 ForwardX11 yes 00:04:30.066 00:04:30.079 [Pipeline] withEnv 00:04:30.081 [Pipeline] { 00:04:30.095 [Pipeline] sh 00:04:30.377 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:30.378 source /etc/os-release 00:04:30.378 [[ -e /image.version ]] && img=$(< /image.version) 00:04:30.378 # Minimal, systemd-like check. 00:04:30.378 if [[ -e /.dockerenv ]]; then 00:04:30.378 # Clear garbage from the node's name: 00:04:30.378 # agt-er_autotest_547-896 -> autotest_547-896 00:04:30.378 # $HOSTNAME is the actual container id 00:04:30.378 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:30.378 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:30.378 # We can assume this is a mount from a host where container is running, 00:04:30.378 # so fetch its hostname to easily identify the target swarm worker. 
00:04:30.378 container="$(< /etc/hostname) ($agent)" 00:04:30.378 else 00:04:30.378 # Fallback 00:04:30.378 container=$agent 00:04:30.378 fi 00:04:30.378 fi 00:04:30.378 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:30.378 00:04:30.647 [Pipeline] } 00:04:30.662 [Pipeline] // withEnv 00:04:30.669 [Pipeline] setCustomBuildProperty 00:04:30.682 [Pipeline] stage 00:04:30.684 [Pipeline] { (Tests) 00:04:30.697 [Pipeline] sh 00:04:30.978 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:31.251 [Pipeline] sh 00:04:31.536 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:31.812 [Pipeline] timeout 00:04:31.813 Timeout set to expire in 1 hr 30 min 00:04:31.814 [Pipeline] { 00:04:31.826 [Pipeline] sh 00:04:32.103 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:32.669 HEAD is now at a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:04:32.682 [Pipeline] sh 00:04:32.963 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:33.237 [Pipeline] sh 00:04:33.517 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:33.793 [Pipeline] sh 00:04:34.075 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:04:34.335 ++ readlink -f spdk_repo 00:04:34.335 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:34.335 + [[ -n /home/vagrant/spdk_repo ]] 00:04:34.335 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:34.335 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:34.335 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:34.335 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:34.335 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:34.335 + [[ raid-vg-autotest == pkgdep-* ]] 00:04:34.335 + cd /home/vagrant/spdk_repo 00:04:34.335 + source /etc/os-release 00:04:34.335 ++ NAME='Fedora Linux' 00:04:34.335 ++ VERSION='39 (Cloud Edition)' 00:04:34.335 ++ ID=fedora 00:04:34.335 ++ VERSION_ID=39 00:04:34.335 ++ VERSION_CODENAME= 00:04:34.335 ++ PLATFORM_ID=platform:f39 00:04:34.335 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:34.335 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:34.335 ++ LOGO=fedora-logo-icon 00:04:34.335 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:34.335 ++ HOME_URL=https://fedoraproject.org/ 00:04:34.335 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:34.335 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:34.335 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:34.335 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:34.335 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:34.335 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:34.335 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:34.335 ++ SUPPORT_END=2024-11-12 00:04:34.335 ++ VARIANT='Cloud Edition' 00:04:34.335 ++ VARIANT_ID=cloud 00:04:34.335 + uname -a 00:04:34.335 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:34.335 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:34.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.904 Hugepages 00:04:34.904 node hugesize free / total 00:04:34.904 node0 1048576kB 0 / 0 00:04:34.904 node0 2048kB 0 / 0 00:04:34.904 00:04:34.904 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:34.904 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:34.904 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:34.904 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:04:34.904 + rm -f /tmp/spdk-ld-path 00:04:34.904 + source autorun-spdk.conf 00:04:34.904 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:34.904 ++ SPDK_RUN_ASAN=1 00:04:34.904 ++ SPDK_RUN_UBSAN=1 00:04:34.904 ++ SPDK_TEST_RAID=1 00:04:34.904 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:04:34.904 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:04:34.904 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:34.904 ++ RUN_NIGHTLY=1 00:04:34.904 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:34.904 + [[ -n '' ]] 00:04:34.904 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:34.904 + for M in /var/spdk/build-*-manifest.txt 00:04:34.904 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:34.904 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:34.904 + for M in /var/spdk/build-*-manifest.txt 00:04:34.904 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:34.904 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:34.904 + for M in /var/spdk/build-*-manifest.txt 00:04:34.904 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:34.904 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:34.904 ++ uname 00:04:34.904 + [[ Linux == \L\i\n\u\x ]] 00:04:34.904 + sudo dmesg -T 00:04:35.162 + sudo dmesg --clear 00:04:35.162 + dmesg_pid=6159 00:04:35.162 + [[ Fedora Linux == FreeBSD ]] 00:04:35.162 + sudo dmesg -Tw 00:04:35.162 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:35.162 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:35.162 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:35.162 + [[ -x /usr/src/fio-static/fio ]] 00:04:35.162 + export FIO_BIN=/usr/src/fio-static/fio 00:04:35.162 + FIO_BIN=/usr/src/fio-static/fio 00:04:35.162 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:35.162 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:04:35.162 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:35.162 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:35.162 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:35.162 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:35.162 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:35.162 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:35.162 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:35.162 16:20:16 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:04:35.162 16:20:16 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:35.162 16:20:16 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:35.162 16:20:16 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:04:35.162 16:20:16 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:04:35.162 16:20:16 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:04:35.162 16:20:16 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:04:35.162 16:20:16 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:04:35.162 16:20:16 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:35.162 16:20:16 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1 00:04:35.162 16:20:16 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:35.162 16:20:16 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:35.162 16:20:16 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:04:35.162 16:20:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.162 16:20:16 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:35.162 16:20:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:35.162 16:20:16 
-- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.162 16:20:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.162 16:20:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.162 16:20:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.162 16:20:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.162 16:20:16 -- paths/export.sh@5 -- $ export PATH 00:04:35.162 16:20:16 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.162 16:20:16 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:35.421 16:20:16 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:35.421 16:20:17 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733502017.XXXXXX 00:04:35.421 16:20:17 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733502017.iTrApQ 00:04:35.421 16:20:17 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:35.421 16:20:17 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:04:35.421 16:20:17 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:04:35.421 16:20:17 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:04:35.421 16:20:17 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:35.421 16:20:17 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:35.421 16:20:17 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:35.421 16:20:17 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:35.421 16:20:17 -- common/autotest_common.sh@10 -- $ set +x 00:04:35.421 16:20:17 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:04:35.421 16:20:17 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:04:35.421 16:20:17 -- pm/common@17 -- $ local monitor 00:04:35.421 16:20:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.421 16:20:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.421 16:20:17 -- pm/common@25 -- $ sleep 1 00:04:35.421 16:20:17 -- pm/common@21 -- $ date +%s 00:04:35.421 16:20:17 -- pm/common@21 -- $ date +%s 00:04:35.421 16:20:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733502017 00:04:35.421 16:20:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733502017 00:04:35.421 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733502017_collect-vmstat.pm.log 00:04:35.421 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733502017_collect-cpu-load.pm.log 00:04:36.359 16:20:18 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:04:36.359 16:20:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:36.359 16:20:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:36.359 16:20:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:36.359 16:20:18 -- spdk/autobuild.sh@16 -- $ date -u 00:04:36.359 Fri Dec 6 04:20:18 PM UTC 2024 00:04:36.359 16:20:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:36.359 v25.01-pre-303-ga5e6ecf28 00:04:36.359 16:20:18 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:04:36.359 16:20:18 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:04:36.359 16:20:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 
00:04:36.359 16:20:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:36.359 16:20:18 -- common/autotest_common.sh@10 -- $ set +x 00:04:36.359 ************************************ 00:04:36.359 START TEST asan 00:04:36.359 ************************************ 00:04:36.359 using asan 00:04:36.359 16:20:18 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:04:36.359 00:04:36.359 real 0m0.000s 00:04:36.359 user 0m0.000s 00:04:36.359 sys 0m0.000s 00:04:36.359 16:20:18 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:36.359 16:20:18 asan -- common/autotest_common.sh@10 -- $ set +x 00:04:36.359 ************************************ 00:04:36.359 END TEST asan 00:04:36.359 ************************************ 00:04:36.359 16:20:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:36.360 16:20:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:36.360 16:20:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:36.360 16:20:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:36.360 16:20:18 -- common/autotest_common.sh@10 -- $ set +x 00:04:36.360 ************************************ 00:04:36.360 START TEST ubsan 00:04:36.360 ************************************ 00:04:36.360 using ubsan 00:04:36.360 16:20:18 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:04:36.360 00:04:36.360 real 0m0.000s 00:04:36.360 user 0m0.000s 00:04:36.360 sys 0m0.000s 00:04:36.360 16:20:18 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:36.360 16:20:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:36.360 ************************************ 00:04:36.360 END TEST ubsan 00:04:36.360 ************************************ 00:04:36.621 16:20:18 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:04:36.621 16:20:18 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:04:36.621 16:20:18 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:04:36.621 
16:20:18 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:04:36.621 16:20:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:36.621 16:20:18 -- common/autotest_common.sh@10 -- $ set +x 00:04:36.621 ************************************ 00:04:36.621 START TEST build_native_dpdk 00:04:36.621 ************************************ 00:04:36.621 16:20:18 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@71 -- 
$ dirname /home/vagrant/spdk_repo/dpdk/build 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:04:36.621 eeb0605f11 version: 23.11.0 00:04:36.621 238778122a doc: update release notes for 23.11 00:04:36.621 46aa6b3cfc doc: fix description of RSS features 00:04:36.621 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:04:36.621 7e421ae345 devtools: support skipping forbid rule check 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" 
"power/kvm_vm") 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:04:36.621 16:20:18 
build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:04:36.621 patching file config/rte_config.h 00:04:36.621 Hunk #1 succeeded at 60 (offset 1 line). 
00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:04:36.621 patching file lib/pcapng/rte_pcapng.c 00:04:36.621 16:20:18 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:04:36.621 16:20:18 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:04:36.621 16:20:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:04:36.622 16:20:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:04:36.622 16:20:18 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:04:36.622 16:20:18 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:04:36.622 16:20:18 build_native_dpdk -- common/autobuild_common.sh@191 -- 
$ '[' Linux = FreeBSD ']' 00:04:36.622 16:20:18 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:04:36.622 16:20:18 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:04:43.195 The Meson build system 00:04:43.195 Version: 1.5.0 00:04:43.195 Source dir: /home/vagrant/spdk_repo/dpdk 00:04:43.195 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:04:43.195 Build type: native build 00:04:43.195 Program cat found: YES (/usr/bin/cat) 00:04:43.195 Project name: DPDK 00:04:43.195 Project version: 23.11.0 00:04:43.195 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:43.195 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:43.195 Host machine cpu family: x86_64 00:04:43.195 Host machine cpu: x86_64 00:04:43.195 Message: ## Building in Developer Mode ## 00:04:43.195 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:43.195 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:04:43.195 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:04:43.195 Program python3 found: YES (/usr/bin/python3) 00:04:43.195 Program cat found: YES (/usr/bin/cat) 00:04:43.195 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:04:43.195 Compiler for C supports arguments -march=native: YES 00:04:43.195 Checking for size of "void *" : 8 00:04:43.195 Checking for size of "void *" : 8 (cached) 00:04:43.195 Library m found: YES 00:04:43.195 Library numa found: YES 00:04:43.195 Has header "numaif.h" : YES 00:04:43.195 Library fdt found: NO 00:04:43.195 Library execinfo found: NO 00:04:43.195 Has header "execinfo.h" : YES 00:04:43.195 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:43.196 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:43.196 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:43.196 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:43.196 Run-time dependency openssl found: YES 3.1.1 00:04:43.196 Run-time dependency libpcap found: YES 1.10.4 00:04:43.196 Has header "pcap.h" with dependency libpcap: YES 00:04:43.196 Compiler for C supports arguments -Wcast-qual: YES 00:04:43.196 Compiler for C supports arguments -Wdeprecated: YES 00:04:43.196 Compiler for C supports arguments -Wformat: YES 00:04:43.196 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:43.196 Compiler for C supports arguments -Wformat-security: NO 00:04:43.196 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:43.196 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:43.196 Compiler for C supports arguments -Wnested-externs: YES 00:04:43.196 Compiler for C supports arguments -Wold-style-definition: YES 00:04:43.196 Compiler for C supports arguments -Wpointer-arith: YES 00:04:43.196 Compiler for C supports arguments -Wsign-compare: YES 00:04:43.196 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:43.196 Compiler for C supports arguments -Wundef: YES 00:04:43.196 Compiler for C supports arguments -Wwrite-strings: YES 00:04:43.196 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:43.196 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:43.196 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:04:43.196 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:43.196 Program objdump found: YES (/usr/bin/objdump) 00:04:43.196 Compiler for C supports arguments -mavx512f: YES 00:04:43.196 Checking if "AVX512 checking" compiles: YES 00:04:43.196 Fetching value of define "__SSE4_2__" : 1 00:04:43.196 Fetching value of define "__AES__" : 1 00:04:43.196 Fetching value of define "__AVX__" : 1 00:04:43.196 Fetching value of define "__AVX2__" : 1 00:04:43.196 Fetching value of define "__AVX512BW__" : 1 00:04:43.196 Fetching value of define "__AVX512CD__" : 1 00:04:43.196 Fetching value of define "__AVX512DQ__" : 1 00:04:43.196 Fetching value of define "__AVX512F__" : 1 00:04:43.196 Fetching value of define "__AVX512VL__" : 1 00:04:43.196 Fetching value of define "__PCLMUL__" : 1 00:04:43.196 Fetching value of define "__RDRND__" : 1 00:04:43.196 Fetching value of define "__RDSEED__" : 1 00:04:43.196 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:43.196 Fetching value of define "__znver1__" : (undefined) 00:04:43.196 Fetching value of define "__znver2__" : (undefined) 00:04:43.196 Fetching value of define "__znver3__" : (undefined) 00:04:43.196 Fetching value of define "__znver4__" : (undefined) 00:04:43.196 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:43.196 Message: lib/log: Defining dependency "log" 00:04:43.196 Message: lib/kvargs: Defining dependency "kvargs" 00:04:43.196 Message: lib/telemetry: Defining dependency "telemetry" 00:04:43.196 Checking for function "getentropy" : NO 00:04:43.196 Message: lib/eal: Defining dependency "eal" 00:04:43.196 Message: lib/ring: Defining dependency "ring" 00:04:43.196 Message: lib/rcu: Defining dependency "rcu" 00:04:43.196 Message: lib/mempool: Defining dependency "mempool" 00:04:43.196 Message: lib/mbuf: Defining dependency "mbuf" 00:04:43.196 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:43.196 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:04:43.196 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:43.196 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:43.196 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:43.196 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:43.196 Compiler for C supports arguments -mpclmul: YES 00:04:43.196 Compiler for C supports arguments -maes: YES 00:04:43.196 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:43.196 Compiler for C supports arguments -mavx512bw: YES 00:04:43.196 Compiler for C supports arguments -mavx512dq: YES 00:04:43.196 Compiler for C supports arguments -mavx512vl: YES 00:04:43.196 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:43.196 Compiler for C supports arguments -mavx2: YES 00:04:43.196 Compiler for C supports arguments -mavx: YES 00:04:43.196 Message: lib/net: Defining dependency "net" 00:04:43.196 Message: lib/meter: Defining dependency "meter" 00:04:43.196 Message: lib/ethdev: Defining dependency "ethdev" 00:04:43.196 Message: lib/pci: Defining dependency "pci" 00:04:43.196 Message: lib/cmdline: Defining dependency "cmdline" 00:04:43.196 Message: lib/metrics: Defining dependency "metrics" 00:04:43.196 Message: lib/hash: Defining dependency "hash" 00:04:43.196 Message: lib/timer: Defining dependency "timer" 00:04:43.196 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:43.196 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:43.196 Fetching value of define "__AVX512CD__" : 1 (cached) 00:04:43.196 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:43.196 Message: lib/acl: Defining dependency "acl" 00:04:43.196 Message: lib/bbdev: Defining dependency "bbdev" 00:04:43.196 Message: lib/bitratestats: Defining dependency "bitratestats" 00:04:43.196 Run-time dependency libelf found: YES 0.191 00:04:43.196 Message: lib/bpf: Defining dependency "bpf" 00:04:43.196 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:04:43.196 Message: lib/compressdev: Defining dependency "compressdev" 00:04:43.196 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:43.196 Message: lib/distributor: Defining dependency "distributor" 00:04:43.196 Message: lib/dmadev: Defining dependency "dmadev" 00:04:43.196 Message: lib/efd: Defining dependency "efd" 00:04:43.196 Message: lib/eventdev: Defining dependency "eventdev" 00:04:43.196 Message: lib/dispatcher: Defining dependency "dispatcher" 00:04:43.196 Message: lib/gpudev: Defining dependency "gpudev" 00:04:43.196 Message: lib/gro: Defining dependency "gro" 00:04:43.196 Message: lib/gso: Defining dependency "gso" 00:04:43.196 Message: lib/ip_frag: Defining dependency "ip_frag" 00:04:43.196 Message: lib/jobstats: Defining dependency "jobstats" 00:04:43.196 Message: lib/latencystats: Defining dependency "latencystats" 00:04:43.196 Message: lib/lpm: Defining dependency "lpm" 00:04:43.196 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:43.196 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:43.196 Fetching value of define "__AVX512IFMA__" : (undefined) 00:04:43.196 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:04:43.196 Message: lib/member: Defining dependency "member" 00:04:43.196 Message: lib/pcapng: Defining dependency "pcapng" 00:04:43.196 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:43.196 Message: lib/power: Defining dependency "power" 00:04:43.196 Message: lib/rawdev: Defining dependency "rawdev" 00:04:43.196 Message: lib/regexdev: Defining dependency "regexdev" 00:04:43.196 Message: lib/mldev: Defining dependency "mldev" 00:04:43.196 Message: lib/rib: Defining dependency "rib" 00:04:43.196 Message: lib/reorder: Defining dependency "reorder" 00:04:43.196 Message: lib/sched: Defining dependency "sched" 00:04:43.196 Message: lib/security: Defining dependency "security" 00:04:43.196 Message: lib/stack: Defining dependency "stack" 00:04:43.196 Has header 
"linux/userfaultfd.h" : YES 00:04:43.196 Has header "linux/vduse.h" : YES 00:04:43.196 Message: lib/vhost: Defining dependency "vhost" 00:04:43.196 Message: lib/ipsec: Defining dependency "ipsec" 00:04:43.196 Message: lib/pdcp: Defining dependency "pdcp" 00:04:43.196 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:43.196 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:43.196 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:43.196 Message: lib/fib: Defining dependency "fib" 00:04:43.196 Message: lib/port: Defining dependency "port" 00:04:43.196 Message: lib/pdump: Defining dependency "pdump" 00:04:43.196 Message: lib/table: Defining dependency "table" 00:04:43.196 Message: lib/pipeline: Defining dependency "pipeline" 00:04:43.196 Message: lib/graph: Defining dependency "graph" 00:04:43.196 Message: lib/node: Defining dependency "node" 00:04:43.196 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:43.196 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:43.196 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:44.135 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:44.135 Compiler for C supports arguments -Wno-sign-compare: YES 00:04:44.135 Compiler for C supports arguments -Wno-unused-value: YES 00:04:44.135 Compiler for C supports arguments -Wno-format: YES 00:04:44.135 Compiler for C supports arguments -Wno-format-security: YES 00:04:44.135 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:04:44.135 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:44.135 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:04:44.135 Compiler for C supports arguments -Wno-unused-parameter: YES 00:04:44.135 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:44.135 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:44.135 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:44.135 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:04:44.135 Compiler for C supports arguments -march=skylake-avx512: YES 00:04:44.135 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:04:44.135 Has header "sys/epoll.h" : YES 00:04:44.135 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:44.135 Configuring doxy-api-html.conf using configuration 00:04:44.135 Configuring doxy-api-man.conf using configuration 00:04:44.135 Program mandb found: YES (/usr/bin/mandb) 00:04:44.135 Program sphinx-build found: NO 00:04:44.135 Configuring rte_build_config.h using configuration 00:04:44.135 Message: 00:04:44.135 ================= 00:04:44.135 Applications Enabled 00:04:44.135 ================= 00:04:44.135 00:04:44.135 apps: 00:04:44.135 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:04:44.135 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:04:44.135 test-pmd, test-regex, test-sad, test-security-perf, 00:04:44.135 00:04:44.135 Message: 00:04:44.135 ================= 00:04:44.135 Libraries Enabled 00:04:44.135 ================= 00:04:44.135 00:04:44.135 libs: 00:04:44.135 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:44.135 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:04:44.135 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:04:44.135 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:04:44.135 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:04:44.135 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:04:44.135 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:04:44.135 00:04:44.135 00:04:44.135 Message: 00:04:44.135 =============== 00:04:44.135 Drivers Enabled 00:04:44.135 =============== 00:04:44.135 00:04:44.135 common: 00:04:44.135 00:04:44.135 bus: 00:04:44.135 pci, vdev, 00:04:44.135 mempool: 00:04:44.135 ring, 00:04:44.135 dma: 
00:04:44.135 00:04:44.135 net: 00:04:44.135 i40e, 00:04:44.135 raw: 00:04:44.135 00:04:44.135 crypto: 00:04:44.135 00:04:44.135 compress: 00:04:44.135 00:04:44.135 regex: 00:04:44.135 00:04:44.135 ml: 00:04:44.135 00:04:44.135 vdpa: 00:04:44.135 00:04:44.135 event: 00:04:44.135 00:04:44.135 baseband: 00:04:44.135 00:04:44.135 gpu: 00:04:44.135 00:04:44.135 00:04:44.135 Message: 00:04:44.135 ================= 00:04:44.135 Content Skipped 00:04:44.135 ================= 00:04:44.135 00:04:44.135 apps: 00:04:44.135 00:04:44.135 libs: 00:04:44.135 00:04:44.135 drivers: 00:04:44.135 common/cpt: not in enabled drivers build config 00:04:44.135 common/dpaax: not in enabled drivers build config 00:04:44.135 common/iavf: not in enabled drivers build config 00:04:44.135 common/idpf: not in enabled drivers build config 00:04:44.135 common/mvep: not in enabled drivers build config 00:04:44.135 common/octeontx: not in enabled drivers build config 00:04:44.135 bus/auxiliary: not in enabled drivers build config 00:04:44.135 bus/cdx: not in enabled drivers build config 00:04:44.135 bus/dpaa: not in enabled drivers build config 00:04:44.135 bus/fslmc: not in enabled drivers build config 00:04:44.135 bus/ifpga: not in enabled drivers build config 00:04:44.135 bus/platform: not in enabled drivers build config 00:04:44.135 bus/vmbus: not in enabled drivers build config 00:04:44.135 common/cnxk: not in enabled drivers build config 00:04:44.135 common/mlx5: not in enabled drivers build config 00:04:44.135 common/nfp: not in enabled drivers build config 00:04:44.135 common/qat: not in enabled drivers build config 00:04:44.135 common/sfc_efx: not in enabled drivers build config 00:04:44.135 mempool/bucket: not in enabled drivers build config 00:04:44.135 mempool/cnxk: not in enabled drivers build config 00:04:44.135 mempool/dpaa: not in enabled drivers build config 00:04:44.135 mempool/dpaa2: not in enabled drivers build config 00:04:44.135 mempool/octeontx: not in enabled drivers build 
config 00:04:44.135 mempool/stack: not in enabled drivers build config 00:04:44.135 dma/cnxk: not in enabled drivers build config 00:04:44.135 dma/dpaa: not in enabled drivers build config 00:04:44.135 dma/dpaa2: not in enabled drivers build config 00:04:44.135 dma/hisilicon: not in enabled drivers build config 00:04:44.135 dma/idxd: not in enabled drivers build config 00:04:44.135 dma/ioat: not in enabled drivers build config 00:04:44.135 dma/skeleton: not in enabled drivers build config 00:04:44.135 net/af_packet: not in enabled drivers build config 00:04:44.135 net/af_xdp: not in enabled drivers build config 00:04:44.135 net/ark: not in enabled drivers build config 00:04:44.135 net/atlantic: not in enabled drivers build config 00:04:44.135 net/avp: not in enabled drivers build config 00:04:44.135 net/axgbe: not in enabled drivers build config 00:04:44.135 net/bnx2x: not in enabled drivers build config 00:04:44.135 net/bnxt: not in enabled drivers build config 00:04:44.135 net/bonding: not in enabled drivers build config 00:04:44.135 net/cnxk: not in enabled drivers build config 00:04:44.135 net/cpfl: not in enabled drivers build config 00:04:44.135 net/cxgbe: not in enabled drivers build config 00:04:44.135 net/dpaa: not in enabled drivers build config 00:04:44.136 net/dpaa2: not in enabled drivers build config 00:04:44.136 net/e1000: not in enabled drivers build config 00:04:44.136 net/ena: not in enabled drivers build config 00:04:44.136 net/enetc: not in enabled drivers build config 00:04:44.136 net/enetfec: not in enabled drivers build config 00:04:44.136 net/enic: not in enabled drivers build config 00:04:44.136 net/failsafe: not in enabled drivers build config 00:04:44.136 net/fm10k: not in enabled drivers build config 00:04:44.136 net/gve: not in enabled drivers build config 00:04:44.136 net/hinic: not in enabled drivers build config 00:04:44.136 net/hns3: not in enabled drivers build config 00:04:44.136 net/iavf: not in enabled drivers build config 
00:04:44.136 net/ice: not in enabled drivers build config 00:04:44.136 net/idpf: not in enabled drivers build config 00:04:44.136 net/igc: not in enabled drivers build config 00:04:44.136 net/ionic: not in enabled drivers build config 00:04:44.136 net/ipn3ke: not in enabled drivers build config 00:04:44.136 net/ixgbe: not in enabled drivers build config 00:04:44.136 net/mana: not in enabled drivers build config 00:04:44.136 net/memif: not in enabled drivers build config 00:04:44.136 net/mlx4: not in enabled drivers build config 00:04:44.136 net/mlx5: not in enabled drivers build config 00:04:44.136 net/mvneta: not in enabled drivers build config 00:04:44.136 net/mvpp2: not in enabled drivers build config 00:04:44.136 net/netvsc: not in enabled drivers build config 00:04:44.136 net/nfb: not in enabled drivers build config 00:04:44.136 net/nfp: not in enabled drivers build config 00:04:44.136 net/ngbe: not in enabled drivers build config 00:04:44.136 net/null: not in enabled drivers build config 00:04:44.136 net/octeontx: not in enabled drivers build config 00:04:44.136 net/octeon_ep: not in enabled drivers build config 00:04:44.136 net/pcap: not in enabled drivers build config 00:04:44.136 net/pfe: not in enabled drivers build config 00:04:44.136 net/qede: not in enabled drivers build config 00:04:44.136 net/ring: not in enabled drivers build config 00:04:44.136 net/sfc: not in enabled drivers build config 00:04:44.136 net/softnic: not in enabled drivers build config 00:04:44.136 net/tap: not in enabled drivers build config 00:04:44.136 net/thunderx: not in enabled drivers build config 00:04:44.136 net/txgbe: not in enabled drivers build config 00:04:44.136 net/vdev_netvsc: not in enabled drivers build config 00:04:44.136 net/vhost: not in enabled drivers build config 00:04:44.136 net/virtio: not in enabled drivers build config 00:04:44.136 net/vmxnet3: not in enabled drivers build config 00:04:44.136 raw/cnxk_bphy: not in enabled drivers build config 00:04:44.136 
raw/cnxk_gpio: not in enabled drivers build config 00:04:44.136 raw/dpaa2_cmdif: not in enabled drivers build config 00:04:44.136 raw/ifpga: not in enabled drivers build config 00:04:44.136 raw/ntb: not in enabled drivers build config 00:04:44.136 raw/skeleton: not in enabled drivers build config 00:04:44.136 crypto/armv8: not in enabled drivers build config 00:04:44.136 crypto/bcmfs: not in enabled drivers build config 00:04:44.136 crypto/caam_jr: not in enabled drivers build config 00:04:44.136 crypto/ccp: not in enabled drivers build config 00:04:44.136 crypto/cnxk: not in enabled drivers build config 00:04:44.136 crypto/dpaa_sec: not in enabled drivers build config 00:04:44.136 crypto/dpaa2_sec: not in enabled drivers build config 00:04:44.136 crypto/ipsec_mb: not in enabled drivers build config 00:04:44.136 crypto/mlx5: not in enabled drivers build config 00:04:44.136 crypto/mvsam: not in enabled drivers build config 00:04:44.136 crypto/nitrox: not in enabled drivers build config 00:04:44.136 crypto/null: not in enabled drivers build config 00:04:44.136 crypto/octeontx: not in enabled drivers build config 00:04:44.136 crypto/openssl: not in enabled drivers build config 00:04:44.136 crypto/scheduler: not in enabled drivers build config 00:04:44.136 crypto/uadk: not in enabled drivers build config 00:04:44.136 crypto/virtio: not in enabled drivers build config 00:04:44.136 compress/isal: not in enabled drivers build config 00:04:44.136 compress/mlx5: not in enabled drivers build config 00:04:44.136 compress/octeontx: not in enabled drivers build config 00:04:44.136 compress/zlib: not in enabled drivers build config 00:04:44.136 regex/mlx5: not in enabled drivers build config 00:04:44.136 regex/cn9k: not in enabled drivers build config 00:04:44.136 ml/cnxk: not in enabled drivers build config 00:04:44.136 vdpa/ifc: not in enabled drivers build config 00:04:44.136 vdpa/mlx5: not in enabled drivers build config 00:04:44.136 vdpa/nfp: not in enabled drivers build 
config 00:04:44.136 vdpa/sfc: not in enabled drivers build config 00:04:44.136 event/cnxk: not in enabled drivers build config 00:04:44.136 event/dlb2: not in enabled drivers build config 00:04:44.136 event/dpaa: not in enabled drivers build config 00:04:44.136 event/dpaa2: not in enabled drivers build config 00:04:44.136 event/dsw: not in enabled drivers build config 00:04:44.136 event/opdl: not in enabled drivers build config 00:04:44.136 event/skeleton: not in enabled drivers build config 00:04:44.136 event/sw: not in enabled drivers build config 00:04:44.136 event/octeontx: not in enabled drivers build config 00:04:44.136 baseband/acc: not in enabled drivers build config 00:04:44.136 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:04:44.136 baseband/fpga_lte_fec: not in enabled drivers build config 00:04:44.136 baseband/la12xx: not in enabled drivers build config 00:04:44.136 baseband/null: not in enabled drivers build config 00:04:44.136 baseband/turbo_sw: not in enabled drivers build config 00:04:44.136 gpu/cuda: not in enabled drivers build config 00:04:44.136 00:04:44.136 00:04:44.136 Build targets in project: 217 00:04:44.136 00:04:44.136 DPDK 23.11.0 00:04:44.136 00:04:44.136 User defined options 00:04:44.136 libdir : lib 00:04:44.136 prefix : /home/vagrant/spdk_repo/dpdk/build 00:04:44.136 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:04:44.136 c_link_args : 00:04:44.136 enable_docs : false 00:04:44.136 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:04:44.136 enable_kmods : false 00:04:44.136 machine : native 00:04:44.136 tests : false 00:04:44.136 00:04:44.136 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:44.136 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:04:44.136 16:20:25 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:04:44.136 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:04:44.136 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:44.136 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:44.136 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:44.136 [4/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:44.136 [5/707] Linking static target lib/librte_kvargs.a 00:04:44.136 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:44.136 [7/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:44.136 [8/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:44.395 [9/707] Linking static target lib/librte_log.a 00:04:44.395 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:44.395 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.395 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:44.395 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:44.395 [14/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:44.653 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:44.653 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:44.653 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.653 [18/707] Linking target lib/librte_log.so.24.0 00:04:44.910 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:44.910 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:44.910 [21/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:44.910 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:44.910 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:44.910 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:44.910 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:44.910 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:04:45.167 [27/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:45.167 [28/707] Linking target lib/librte_kvargs.so.24.0 00:04:45.167 [29/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:45.167 [30/707] Linking static target lib/librte_telemetry.a 00:04:45.167 [31/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:45.167 [32/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:04:45.167 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:45.168 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:45.426 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:45.426 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:45.426 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:45.426 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:45.426 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:45.426 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:45.426 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:45.426 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:45.426 [43/707] Linking target lib/librte_telemetry.so.24.0 00:04:45.426 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:45.684 [45/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:04:45.684 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:45.684 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:45.942 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:45.942 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:45.942 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:45.942 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:45.942 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:45.942 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:45.942 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:45.942 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:46.201 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:46.201 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:46.201 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:46.201 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:46.201 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:46.201 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:46.201 [62/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:46.201 [63/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:46.201 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:46.201 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:46.201 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:46.460 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:46.460 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:46.460 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:46.460 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:46.791 [71/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:46.791 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:46.791 [73/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:46.791 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:46.791 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:46.791 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:46.791 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:46.791 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:47.067 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:47.067 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:47.067 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:47.067 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:47.067 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:47.067 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:47.067 [85/707] Linking static target lib/librte_ring.a 00:04:47.327 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:47.327 [87/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:47.327 [88/707] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:47.327 [89/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:47.327 [90/707] Linking static target lib/librte_eal.a 00:04:47.327 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:47.587 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:47.587 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:47.587 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:47.587 [95/707] Linking static target lib/librte_mempool.a 00:04:47.847 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:47.847 [97/707] Linking static target lib/librte_rcu.a 00:04:47.847 [98/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:47.847 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:47.847 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:47.847 [101/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:47.847 [102/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:47.847 [103/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:48.106 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:48.106 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.106 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.106 [107/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:48.106 [108/707] Linking static target lib/librte_net.a 00:04:48.366 [109/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:48.366 [110/707] Linking static target lib/librte_meter.a 00:04:48.366 [111/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:48.366 [112/707] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:48.366 [113/707] Linking static target lib/librte_mbuf.a 00:04:48.366 [114/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.366 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:48.366 [116/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.625 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:48.625 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:48.884 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.884 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:48.884 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:49.453 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:49.453 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:49.453 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:49.453 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:49.453 [126/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:49.453 [127/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:49.453 [128/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:49.453 [129/707] Linking static target lib/librte_pci.a 00:04:49.453 [130/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:49.712 [131/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:49.713 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:49.713 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:49.713 [134/707] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:49.713 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:49.713 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:49.713 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:49.713 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:49.713 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:49.713 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:49.972 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:49.972 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:49.972 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:49.972 [144/707] Linking static target lib/librte_cmdline.a 00:04:49.972 [145/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:50.231 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:04:50.231 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:04:50.231 [148/707] Linking static target lib/librte_metrics.a 00:04:50.231 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:50.491 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:50.491 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.750 [152/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.750 [153/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:50.750 [154/707] Linking static target lib/librte_timer.a 00:04:50.750 [155/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:51.008 [156/707] Compiling C object 
lib/librte_acl.a.p/acl_acl_gen.c.o 00:04:51.008 [157/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.266 [158/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:04:51.266 [159/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:04:51.266 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:04:51.525 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:04:51.784 [162/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:04:51.784 [163/707] Linking static target lib/librte_bitratestats.a 00:04:51.784 [164/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:04:51.784 [165/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.043 [166/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:04:52.043 [167/707] Linking static target lib/librte_bbdev.a 00:04:52.043 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:04:52.303 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:04:52.563 [170/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.563 [171/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:52.563 [172/707] Linking static target lib/librte_hash.a 00:04:52.563 [173/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:04:52.563 [174/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:52.563 [175/707] Linking static target lib/librte_ethdev.a 00:04:52.563 [176/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:04:52.823 [177/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:04:52.823 [178/707] Linking static target lib/acl/libavx2_tmp.a 00:04:52.823 [179/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:04:52.823 [180/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 
00:04:52.823 [181/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.084 [182/707] Linking target lib/librte_eal.so.24.0 00:04:53.084 [183/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.084 [184/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:04:53.084 [185/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:04:53.084 [186/707] Linking target lib/librte_ring.so.24.0 00:04:53.084 [187/707] Linking target lib/librte_meter.so.24.0 00:04:53.084 [188/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:04:53.344 [189/707] Linking target lib/librte_pci.so.24.0 00:04:53.344 [190/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:04:53.344 [191/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:04:53.344 [192/707] Linking target lib/librte_rcu.so.24.0 00:04:53.344 [193/707] Linking target lib/librte_mempool.so.24.0 00:04:53.344 [194/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:04:53.344 [195/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:04:53.344 [196/707] Linking static target lib/librte_cfgfile.a 00:04:53.344 [197/707] Linking target lib/librte_timer.so.24.0 00:04:53.344 [198/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:04:53.344 [199/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:04:53.344 [200/707] Linking target lib/librte_mbuf.so.24.0 00:04:53.604 [201/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:53.604 [202/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:04:53.604 [203/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:53.604 [204/707] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:04:53.604 [205/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:04:53.604 [206/707] Linking target lib/librte_net.so.24.0 00:04:53.604 [207/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.604 [208/707] Linking target lib/librte_bbdev.so.24.0 00:04:53.604 [209/707] Linking target lib/librte_cfgfile.so.24.0 00:04:53.863 [210/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:04:53.863 [211/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:04:53.863 [212/707] Linking target lib/librte_cmdline.so.24.0 00:04:53.863 [213/707] Linking target lib/librte_hash.so.24.0 00:04:53.863 [214/707] Linking static target lib/librte_bpf.a 00:04:53.863 [215/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:53.863 [216/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:04:53.863 [217/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:04:53.863 [218/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:54.122 [219/707] Linking static target lib/librte_acl.a 00:04:54.122 [220/707] Linking static target lib/librte_compressdev.a 00:04:54.122 [221/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:54.122 [222/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.381 [223/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:04:54.381 [224/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.381 [225/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:04:54.381 [226/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:04:54.381 [227/707] 
Linking static target lib/librte_distributor.a 00:04:54.381 [228/707] Linking target lib/librte_acl.so.24.0 00:04:54.382 [229/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.382 [230/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:04:54.382 [231/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:54.382 [232/707] Linking target lib/librte_compressdev.so.24.0 00:04:54.641 [233/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.641 [234/707] Linking target lib/librte_distributor.so.24.0 00:04:54.641 [235/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:04:54.900 [236/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:54.900 [237/707] Linking static target lib/librte_dmadev.a 00:04:55.159 [238/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:04:55.159 [239/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:04:55.159 [240/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.159 [241/707] Linking target lib/librte_dmadev.so.24.0 00:04:55.419 [242/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:04:55.419 [243/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:04:55.419 [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:04:55.419 [245/707] Linking static target lib/librte_efd.a 00:04:55.678 [246/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.678 [247/707] Linking target lib/librte_efd.so.24.0 00:04:55.678 [248/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:55.678 [249/707] Linking static target lib/librte_cryptodev.a 00:04:55.678 [250/707] 
Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:04:55.938 [251/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:04:55.938 [252/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:04:55.938 [253/707] Linking static target lib/librte_dispatcher.a 00:04:56.198 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:04:56.198 [255/707] Linking static target lib/librte_gpudev.a 00:04:56.198 [256/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:04:56.457 [257/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:04:56.457 [258/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:04:56.457 [259/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.457 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:04:56.769 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:04:56.769 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:04:56.769 [263/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:57.034 [264/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:04:57.034 [265/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:57.034 [266/707] Linking target lib/librte_cryptodev.so.24.0 00:04:57.034 [267/707] Linking target lib/librte_gpudev.so.24.0 00:04:57.034 [268/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:04:57.034 [269/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:04:57.034 [270/707] Linking static target lib/librte_gro.a 00:04:57.034 [271/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:57.034 [272/707] Generating symbol file 
lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:04:57.034 [273/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:04:57.034 [274/707] Linking target lib/librte_ethdev.so.24.0 00:04:57.034 [275/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:04:57.293 [276/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:04:57.293 [277/707] Linking static target lib/librte_eventdev.a 00:04:57.293 [278/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:04:57.293 [279/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:04:57.293 [280/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:04:57.293 [281/707] Linking target lib/librte_metrics.so.24.0 00:04:57.293 [282/707] Linking target lib/librte_bpf.so.24.0 00:04:57.293 [283/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:04:57.293 [284/707] Linking target lib/librte_gro.so.24.0 00:04:57.293 [285/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:04:57.293 [286/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:04:57.293 [287/707] Linking static target lib/librte_gso.a 00:04:57.552 [288/707] Linking target lib/librte_bitratestats.so.24.0 00:04:57.552 [289/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:04:57.552 [290/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:04:57.552 [291/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:04:57.552 [292/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:04:57.552 [293/707] Linking target lib/librte_gso.so.24.0 00:04:57.811 [294/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:04:57.811 [295/707] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:04:57.811 [296/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:04:57.811 [297/707] Linking static target lib/librte_jobstats.a 00:04:57.811 [298/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:04:58.070 [299/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:04:58.070 [300/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:04:58.070 [301/707] Linking static target lib/librte_latencystats.a 00:04:58.070 [302/707] Linking static target lib/librte_ip_frag.a 00:04:58.070 [303/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:04:58.070 [304/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:04:58.070 [305/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:04:58.070 [306/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:04:58.070 [307/707] Linking target lib/librte_jobstats.so.24.0 00:04:58.329 [308/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:04:58.329 [309/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:04:58.329 [310/707] Linking target lib/librte_latencystats.so.24.0 00:04:58.329 [311/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:58.329 [312/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:04:58.329 [313/707] Linking target lib/librte_ip_frag.so.24.0 00:04:58.329 [314/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:58.329 [315/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:04:58.588 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:58.588 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:04:58.588 
[318/707] Linking static target lib/librte_lpm.a 00:04:58.848 [319/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:04:58.848 [320/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:58.848 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:58.848 [322/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:04:58.848 [323/707] Linking static target lib/librte_pcapng.a 00:04:58.848 [324/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:04:58.848 [325/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:58.848 [326/707] Linking target lib/librte_lpm.so.24.0 00:04:59.107 [327/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:59.107 [328/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:04:59.107 [329/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:04:59.107 [330/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.107 [331/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:59.107 [332/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.107 [333/707] Linking target lib/librte_eventdev.so.24.0 00:04:59.107 [334/707] Linking target lib/librte_pcapng.so.24.0 00:04:59.107 [335/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:59.365 [336/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:04:59.365 [337/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:04:59.365 [338/707] Linking target lib/librte_dispatcher.so.24.0 00:04:59.365 [339/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:59.623 [340/707] Compiling C object 
lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:04:59.623 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:59.623 [342/707] Linking static target lib/librte_power.a 00:04:59.623 [343/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:04:59.623 [344/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:04:59.623 [345/707] Linking static target lib/librte_regexdev.a 00:04:59.623 [346/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:04:59.623 [347/707] Linking static target lib/librte_rawdev.a 00:04:59.881 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:04:59.881 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:04:59.881 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:04:59.881 [351/707] Linking static target lib/librte_mldev.a 00:04:59.881 [352/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:05:00.140 [353/707] Linking static target lib/librte_member.a 00:05:00.140 [354/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:05:00.140 [355/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.140 [356/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.140 [357/707] Linking target lib/librte_power.so.24.0 00:05:00.140 [358/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:05:00.140 [359/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:05:00.140 [360/707] Linking target lib/librte_rawdev.so.24.0 00:05:00.398 [361/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:00.398 [362/707] Linking static target lib/librte_reorder.a 00:05:00.398 [363/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.398 [364/707] Generating 
lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.398 [365/707] Linking target lib/librte_member.so.24.0 00:05:00.398 [366/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:05:00.398 [367/707] Linking static target lib/librte_rib.a 00:05:00.398 [368/707] Linking target lib/librte_regexdev.so.24.0 00:05:00.657 [369/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:05:00.657 [370/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:00.657 [371/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.657 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:05:00.657 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:05:00.657 [374/707] Linking target lib/librte_reorder.so.24.0 00:05:00.657 [375/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:05:00.657 [376/707] Linking static target lib/librte_stack.a 00:05:00.657 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:05:00.915 [378/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.915 [379/707] Linking target lib/librte_rib.so.24.0 00:05:00.915 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.915 [381/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:00.915 [382/707] Linking static target lib/librte_security.a 00:05:00.915 [383/707] Linking target lib/librte_stack.so.24.0 00:05:00.915 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:05:01.173 [385/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.173 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:01.173 [387/707] Linking target lib/librte_mldev.so.24.0 00:05:01.173 [388/707] 
Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:01.432 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.432 [390/707] Linking target lib/librte_security.so.24.0 00:05:01.432 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:01.432 [392/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:05:01.432 [393/707] Linking static target lib/librte_sched.a 00:05:01.432 [394/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:05:01.690 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:01.690 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:01.949 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.949 [398/707] Linking target lib/librte_sched.so.24.0 00:05:01.949 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:01.949 [400/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:05:02.207 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:05:02.207 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:05:02.207 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:02.465 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:05:02.465 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:05:02.465 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:05:02.465 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:05:02.725 [408/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:05:02.725 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:05:02.725 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:05:02.985 [411/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 
00:05:02.985 [412/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:05:02.985 [413/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:05:02.985 [414/707] Linking static target lib/librte_ipsec.a 00:05:02.985 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:05:03.244 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.244 [417/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:05:03.244 [418/707] Linking target lib/librte_ipsec.so.24.0 00:05:03.503 [419/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:05:03.503 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:05:03.804 [421/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:05:03.804 [422/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:05:03.804 [423/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:05:03.804 [424/707] Linking static target lib/librte_fib.a 00:05:03.804 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:05:04.075 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:05:04.075 [427/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.075 [428/707] Linking target lib/librte_fib.so.24.0 00:05:04.075 [429/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:05:04.075 [430/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:05:04.075 [431/707] Linking static target lib/librte_pdcp.a 00:05:04.075 [432/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:05:04.334 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.334 [434/707] Linking target lib/librte_pdcp.so.24.0 00:05:04.594 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:05:04.594 [436/707] Compiling C 
object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:05:04.853 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:05:04.853 [438/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:05:04.853 [439/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:05:04.853 [440/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:05:05.113 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:05:05.113 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:05:05.373 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:05:05.373 [444/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:05:05.373 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:05:05.373 [446/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:05:05.373 [447/707] Linking static target lib/librte_port.a 00:05:05.373 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:05:05.633 [449/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:05:05.633 [450/707] Linking static target lib/librte_pdump.a 00:05:05.633 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:05:05.633 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:05:05.893 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:05:05.893 [454/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.893 [455/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.893 [456/707] Linking target lib/librte_pdump.so.24.0 00:05:05.893 [457/707] Linking target lib/librte_port.so.24.0 00:05:06.151 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:05:06.151 
[459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:05:06.410 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:05:06.410 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:05:06.410 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:05:06.410 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:05:06.410 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:05:06.978 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:05:06.978 [466/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:05:06.978 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:05:06.978 [468/707] Linking static target lib/librte_table.a 00:05:06.978 [469/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:07.236 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:05:07.495 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:05:07.495 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.495 [473/707] Linking target lib/librte_table.so.24.0 00:05:07.495 [474/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:05:07.495 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:05:07.495 [476/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:05:07.755 [477/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:05:07.755 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:05:08.015 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:05:08.015 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:05:08.015 [481/707] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:05:08.015 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:05:08.273 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:05:08.533 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:05:08.533 [485/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:05:08.533 [486/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:05:08.533 [487/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:05:08.533 [488/707] Linking static target lib/librte_graph.a 00:05:08.792 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:05:08.792 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:05:09.050 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:05:09.050 [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.050 [493/707] Linking target lib/librte_graph.so.24.0 00:05:09.309 [494/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:05:09.309 [495/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:05:09.569 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:05:09.569 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:05:09.569 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:05:09.569 [499/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:05:09.569 [500/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:05:09.828 [501/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:09.828 [502/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:05:09.828 [503/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:05:09.828 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 
00:05:10.088 [505/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:10.088 [506/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:10.088 [507/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:05:10.088 [508/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:05:10.348 [509/707] Linking static target lib/librte_node.a 00:05:10.349 [510/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:10.349 [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:10.349 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:10.608 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.608 [514/707] Linking target lib/librte_node.so.24.0 00:05:10.608 [515/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:10.608 [516/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:10.609 [517/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:10.609 [518/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:10.868 [519/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:10.868 [520/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:10.868 [521/707] Linking static target drivers/librte_bus_pci.a 00:05:10.868 [522/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:10.868 [523/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:10.868 [524/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:10.868 [525/707] Linking static target drivers/librte_bus_vdev.a 00:05:10.868 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:05:10.868 [527/707] Compiling C 
object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:11.127 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:05:11.127 [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:05:11.127 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.127 [531/707] Linking target drivers/librte_bus_vdev.so.24.0 00:05:11.127 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:11.127 [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:11.387 [534/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.387 [535/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:05:11.387 [536/707] Linking target drivers/librte_bus_pci.so.24.0 00:05:11.387 [537/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:11.387 [538/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:11.387 [539/707] Linking static target drivers/librte_mempool_ring.a 00:05:11.387 [540/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:11.387 [541/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:05:11.387 [542/707] Linking target drivers/librte_mempool_ring.so.24.0 00:05:11.387 [543/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:05:11.956 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:05:12.215 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:05:12.216 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:05:12.216 [547/707] Linking static target 
drivers/net/i40e/base/libi40e_base.a 00:05:12.786 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:05:13.045 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:05:13.305 [550/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:05:13.305 [551/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:05:13.305 [552/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:05:13.305 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:05:13.305 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:05:13.564 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:05:13.564 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:05:13.822 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:05:13.822 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:05:13.822 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:05:14.391 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:05:14.391 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:05:14.391 [562/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:05:14.391 [563/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:05:14.649 [564/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:05:14.909 [565/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:05:14.909 [566/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:05:14.909 [567/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:05:14.909 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:05:14.909 [569/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:05:15.168 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:05:15.168 [571/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:05:15.168 [572/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:05:15.168 [573/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:05:15.427 [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:05:15.427 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:05:15.686 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:05:15.686 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:05:15.686 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:05:15.945 [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:05:15.945 [580/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:05:15.945 [581/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:05:16.204 [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:05:16.204 [583/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:05:16.204 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:05:16.204 [585/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:05:16.204 [586/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:05:16.204 [587/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:05:16.204 [588/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:05:16.463 [589/707] Linking static target drivers/librte_net_i40e.a 00:05:16.463 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:05:16.722 [591/707] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:05:16.982 [592/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.982 [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:05:16.982 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:05:16.982 [595/707] Linking target drivers/librte_net_i40e.so.24.0 00:05:17.242 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:05:17.242 [597/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:05:17.242 [598/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:05:17.501 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:05:17.501 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:05:17.501 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:05:17.760 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:05:17.760 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:05:17.760 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:05:18.019 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:05:18.019 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:05:18.020 [607/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:18.279 [608/707] Linking static target lib/librte_vhost.a 00:05:18.279 [609/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:05:18.279 [610/707] Compiling C object 
app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:05:18.279 [611/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:05:18.279 [612/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:05:18.279 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:05:18.538 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:05:18.538 [615/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:05:18.798 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:05:18.798 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:05:18.798 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:05:19.368 [619/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.368 [620/707] Linking target lib/librte_vhost.so.24.0 00:05:19.628 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:05:19.628 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:05:19.887 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:05:19.887 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:05:19.887 [625/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:05:19.887 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:05:19.887 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:05:20.145 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:05:20.145 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:05:20.145 [630/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 
00:05:20.145 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:05:20.404 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:05:20.404 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:05:20.404 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:05:20.404 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:05:20.724 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:05:20.725 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:05:20.725 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:05:21.009 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:05:21.009 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:05:21.009 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:05:21.009 [642/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:05:21.009 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:05:21.267 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:05:21.267 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:05:21.527 [646/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:05:21.527 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:05:21.527 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:05:21.527 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:05:21.527 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:05:21.786 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:05:22.045 
[652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:05:22.045 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:05:22.045 [654/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:05:22.305 [655/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:05:22.305 [656/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:05:22.305 [657/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:05:22.305 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:05:22.564 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:05:22.822 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:05:22.822 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:05:23.080 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:05:23.081 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:05:23.081 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:05:23.339 [665/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:05:23.339 [666/707] Linking static target lib/librte_pipeline.a 00:05:23.339 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:05:23.597 [668/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:05:23.597 [669/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:05:23.597 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:05:23.855 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:05:23.855 [672/707] Linking target app/dpdk-dumpcap 00:05:23.855 [673/707] Linking target app/dpdk-graph 00:05:24.113 [674/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:05:24.113 [675/707] Linking target app/dpdk-pdump 00:05:24.113 [676/707] Linking target app/dpdk-proc-info 
00:05:24.372 [677/707] Linking target app/dpdk-test-acl 00:05:24.372 [678/707] Linking target app/dpdk-test-bbdev 00:05:24.372 [679/707] Linking target app/dpdk-test-crypto-perf 00:05:24.372 [680/707] Linking target app/dpdk-test-compress-perf 00:05:24.372 [681/707] Linking target app/dpdk-test-cmdline 00:05:24.649 [682/707] Linking target app/dpdk-test-dma-perf 00:05:24.649 [683/707] Linking target app/dpdk-test-eventdev 00:05:24.940 [684/707] Linking target app/dpdk-test-fib 00:05:24.940 [685/707] Linking target app/dpdk-test-flow-perf 00:05:24.940 [686/707] Linking target app/dpdk-test-gpudev 00:05:24.940 [687/707] Linking target app/dpdk-test-mldev 00:05:24.940 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:05:24.940 [689/707] Linking target app/dpdk-test-pipeline 00:05:25.198 [690/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:05:25.199 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:05:25.458 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:05:25.718 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:05:25.718 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:05:25.977 [695/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:05:25.977 [696/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:05:25.977 [697/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:05:25.977 [698/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:05:26.237 [699/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:05:26.237 [700/707] Linking target app/dpdk-test-sad 00:05:26.497 [701/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:26.497 [702/707] Linking target app/dpdk-test-regex 00:05:26.497 [703/707] Linking target lib/librte_pipeline.so.24.0 00:05:26.497 [704/707] 
Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:05:26.756 [705/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:05:27.015 [706/707] Linking target app/dpdk-test-security-perf 00:05:27.015 [707/707] Linking target app/dpdk-testpmd 00:05:27.274 16:21:08 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:05:27.274 16:21:08 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:05:27.274 16:21:08 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:05:27.274 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:05:27.274 [0/1] Installing files. 00:05:27.536 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 
00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:05:27.536 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:05:27.536 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 
00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 
00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 
Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:27.537 
Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.538 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 
00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:05:27.538 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:27.538 Installing 
/home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:05:27.538 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:05:27.539 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:27.539 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:05:27.539 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 
00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:05:27.540 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:05:27.540 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:05:27.541 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:05:27.541 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:05:27.541 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:05:27.541 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:05:27.541 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:05:27.541 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.541 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_rcu.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing 
lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing 
lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_rawdev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.801 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 
Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:27.802 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:28.064 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:28.064 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:28.064 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:28.064 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:05:28.064 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:28.064 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:05:28.064 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:28.064 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:05:28.064 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:05:28.064 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:05:28.064 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-graph to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing 
/home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.064 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.065 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing 
/home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing 
/home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing 
/home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 
Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 
Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.066 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing 
/home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:05:28.067 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:05:28.067 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:05:28.067 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:05:28.067 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:05:28.067 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:05:28.067 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:05:28.067 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:05:28.067 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:05:28.067 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:05:28.067 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:05:28.067 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:05:28.067 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:05:28.067 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:05:28.067 Installing symlink pointing to librte_mempool.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:05:28.067 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:05:28.067 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:05:28.067 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:05:28.067 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:05:28.067 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:05:28.067 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:05:28.067 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:05:28.067 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:05:28.067 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:05:28.067 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:05:28.067 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:05:28.067 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:05:28.067 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:05:28.067 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:05:28.067 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:05:28.067 Installing symlink pointing to librte_hash.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:05:28.067 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:05:28.067 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:05:28.067 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:05:28.067 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:05:28.067 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:05:28.067 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:05:28.067 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:05:28.067 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:05:28.067 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:05:28.067 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:05:28.067 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:05:28.067 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:05:28.067 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:05:28.067 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:05:28.067 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:05:28.067 Installing symlink pointing to 
librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:05:28.067 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:05:28.067 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:05:28.067 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:05:28.067 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:05:28.067 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:05:28.067 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:05:28.067 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:05:28.067 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:05:28.067 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:05:28.067 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:05:28.067 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:05:28.067 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:05:28.067 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:05:28.067 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:05:28.067 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 
00:05:28.067 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:05:28.067 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:05:28.067 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:05:28.067 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:05:28.067 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:05:28.067 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:05:28.067 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:05:28.068 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:05:28.068 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:05:28.068 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:05:28.068 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:05:28.068 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:05:28.068 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:05:28.068 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:05:28.068 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:05:28.068 Installing symlink pointing to librte_power.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:05:28.068 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:05:28.068 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:05:28.068 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:05:28.068 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:05:28.068 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:05:28.068 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:05:28.068 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:05:28.068 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:05:28.068 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:05:28.068 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:05:28.068 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:05:28.068 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:05:28.068 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:05:28.068 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:05:28.068 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:05:28.068 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:05:28.068 
'./librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:05:28.068 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:05:28.068 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:05:28.068 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:05:28.068 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:05:28.068 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:05:28.068 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:05:28.068 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:05:28.068 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:05:28.068 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:05:28.068 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:05:28.068 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:05:28.068 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:05:28.068 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:05:28.068 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:05:28.068 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:05:28.068 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:05:28.068 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:05:28.068 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:05:28.068 Installing symlink pointing to librte_fib.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:05:28.068 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:05:28.068 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:05:28.068 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:05:28.068 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:05:28.068 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:05:28.068 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:05:28.068 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:05:28.068 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:05:28.068 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:05:28.068 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:05:28.068 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:05:28.068 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:05:28.068 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:05:28.068 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:28.068 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:05:28.068 Installing 
symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:28.068 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:05:28.068 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:28.068 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:05:28.068 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:28.068 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:05:28.068 16:21:09 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:05:28.068 ************************************ 00:05:28.068 END TEST build_native_dpdk 00:05:28.068 ************************************ 00:05:28.068 16:21:09 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:28.068 00:05:28.068 real 0m51.611s 00:05:28.068 user 6m1.568s 00:05:28.068 sys 0m58.092s 00:05:28.068 16:21:09 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:28.068 16:21:09 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:05:28.327 16:21:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:28.327 16:21:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:28.327 16:21:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:28.327 16:21:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:28.327 16:21:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:28.327 16:21:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:28.327 16:21:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:28.327 16:21:09 -- spdk/autobuild.sh@67 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:05:28.327 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:05:28.586 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:05:28.586 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:05:28.586 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:28.845 Using 'verbs' RDMA provider 00:05:42.417 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:05:54.615 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:05:54.615 Creating mk/config.mk...done. 00:05:54.615 Creating mk/cc.flags.mk...done. 00:05:54.615 Type 'make' to build. 00:05:54.615 16:21:35 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:05:54.615 16:21:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:54.615 16:21:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:54.615 16:21:35 -- common/autotest_common.sh@10 -- $ set +x 00:05:54.615 ************************************ 00:05:54.615 START TEST make 00:05:54.615 ************************************ 00:05:54.615 16:21:35 make -- common/autotest_common.sh@1129 -- $ make -j10 00:05:54.615 make[1]: Nothing to be done for 'all'. 
00:07:02.286 CC lib/ut/ut.o 00:07:02.286 CC lib/log/log_flags.o 00:07:02.286 CC lib/log/log.o 00:07:02.286 CC lib/log/log_deprecated.o 00:07:02.286 CC lib/ut_mock/mock.o 00:07:02.286 LIB libspdk_ut.a 00:07:02.286 LIB libspdk_log.a 00:07:02.286 SO libspdk_ut.so.2.0 00:07:02.286 LIB libspdk_ut_mock.a 00:07:02.286 SO libspdk_log.so.7.1 00:07:02.286 SO libspdk_ut_mock.so.6.0 00:07:02.286 SYMLINK libspdk_ut.so 00:07:02.286 SYMLINK libspdk_ut_mock.so 00:07:02.286 SYMLINK libspdk_log.so 00:07:02.286 CC lib/ioat/ioat.o 00:07:02.286 CC lib/dma/dma.o 00:07:02.286 CXX lib/trace_parser/trace.o 00:07:02.286 CC lib/util/base64.o 00:07:02.286 CC lib/util/bit_array.o 00:07:02.286 CC lib/util/cpuset.o 00:07:02.286 CC lib/util/crc16.o 00:07:02.286 CC lib/util/crc32.o 00:07:02.286 CC lib/util/crc32c.o 00:07:02.286 CC lib/vfio_user/host/vfio_user_pci.o 00:07:02.286 CC lib/util/crc32_ieee.o 00:07:02.286 CC lib/util/crc64.o 00:07:02.286 CC lib/util/dif.o 00:07:02.286 CC lib/vfio_user/host/vfio_user.o 00:07:02.286 CC lib/util/fd.o 00:07:02.286 CC lib/util/fd_group.o 00:07:02.286 LIB libspdk_dma.a 00:07:02.286 SO libspdk_dma.so.5.0 00:07:02.286 CC lib/util/file.o 00:07:02.286 SYMLINK libspdk_dma.so 00:07:02.286 CC lib/util/hexlify.o 00:07:02.286 CC lib/util/iov.o 00:07:02.286 LIB libspdk_ioat.a 00:07:02.286 SO libspdk_ioat.so.7.0 00:07:02.286 CC lib/util/math.o 00:07:02.286 CC lib/util/net.o 00:07:02.286 SYMLINK libspdk_ioat.so 00:07:02.286 CC lib/util/pipe.o 00:07:02.286 LIB libspdk_vfio_user.a 00:07:02.286 CC lib/util/strerror_tls.o 00:07:02.286 SO libspdk_vfio_user.so.5.0 00:07:02.286 CC lib/util/string.o 00:07:02.286 CC lib/util/uuid.o 00:07:02.286 SYMLINK libspdk_vfio_user.so 00:07:02.286 CC lib/util/xor.o 00:07:02.286 CC lib/util/zipf.o 00:07:02.286 CC lib/util/md5.o 00:07:02.286 LIB libspdk_util.a 00:07:02.286 SO libspdk_util.so.10.1 00:07:02.286 LIB libspdk_trace_parser.a 00:07:02.286 SO libspdk_trace_parser.so.6.0 00:07:02.287 SYMLINK libspdk_util.so 00:07:02.287 SYMLINK 
libspdk_trace_parser.so 00:07:02.287 CC lib/json/json_parse.o 00:07:02.287 CC lib/json/json_util.o 00:07:02.287 CC lib/json/json_write.o 00:07:02.287 CC lib/idxd/idxd.o 00:07:02.287 CC lib/idxd/idxd_user.o 00:07:02.287 CC lib/idxd/idxd_kernel.o 00:07:02.287 CC lib/conf/conf.o 00:07:02.287 CC lib/vmd/vmd.o 00:07:02.287 CC lib/env_dpdk/env.o 00:07:02.287 CC lib/rdma_utils/rdma_utils.o 00:07:02.287 CC lib/env_dpdk/memory.o 00:07:02.287 LIB libspdk_conf.a 00:07:02.287 CC lib/env_dpdk/pci.o 00:07:02.287 CC lib/env_dpdk/init.o 00:07:02.287 SO libspdk_conf.so.6.0 00:07:02.287 SYMLINK libspdk_conf.so 00:07:02.287 CC lib/vmd/led.o 00:07:02.287 CC lib/env_dpdk/threads.o 00:07:02.287 LIB libspdk_json.a 00:07:02.287 LIB libspdk_rdma_utils.a 00:07:02.287 SO libspdk_rdma_utils.so.1.0 00:07:02.287 SO libspdk_json.so.6.0 00:07:02.287 CC lib/env_dpdk/pci_ioat.o 00:07:02.287 CC lib/env_dpdk/pci_virtio.o 00:07:02.287 SYMLINK libspdk_rdma_utils.so 00:07:02.287 CC lib/env_dpdk/pci_vmd.o 00:07:02.287 SYMLINK libspdk_json.so 00:07:02.287 CC lib/env_dpdk/pci_idxd.o 00:07:02.287 CC lib/env_dpdk/pci_event.o 00:07:02.287 LIB libspdk_vmd.a 00:07:02.287 CC lib/env_dpdk/sigbus_handler.o 00:07:02.287 CC lib/env_dpdk/pci_dpdk.o 00:07:02.287 CC lib/rdma_provider/common.o 00:07:02.287 SO libspdk_vmd.so.6.0 00:07:02.287 LIB libspdk_idxd.a 00:07:02.287 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:02.287 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:02.287 SO libspdk_idxd.so.12.1 00:07:02.287 SYMLINK libspdk_vmd.so 00:07:02.287 CC lib/jsonrpc/jsonrpc_server.o 00:07:02.287 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:02.287 SYMLINK libspdk_idxd.so 00:07:02.287 CC lib/jsonrpc/jsonrpc_client.o 00:07:02.287 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:02.287 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:02.287 LIB libspdk_rdma_provider.a 00:07:02.287 SO libspdk_rdma_provider.so.7.0 00:07:02.287 SYMLINK libspdk_rdma_provider.so 00:07:02.287 LIB libspdk_jsonrpc.a 00:07:02.287 SO libspdk_jsonrpc.so.6.0 00:07:02.287 SYMLINK 
libspdk_jsonrpc.so 00:07:02.287 CC lib/rpc/rpc.o 00:07:02.287 LIB libspdk_env_dpdk.a 00:07:02.287 SO libspdk_env_dpdk.so.15.1 00:07:02.287 LIB libspdk_rpc.a 00:07:02.287 SO libspdk_rpc.so.6.0 00:07:02.287 SYMLINK libspdk_env_dpdk.so 00:07:02.287 SYMLINK libspdk_rpc.so 00:07:02.287 CC lib/trace/trace.o 00:07:02.287 CC lib/trace/trace_flags.o 00:07:02.287 CC lib/trace/trace_rpc.o 00:07:02.287 CC lib/keyring/keyring.o 00:07:02.287 CC lib/keyring/keyring_rpc.o 00:07:02.287 CC lib/notify/notify_rpc.o 00:07:02.287 CC lib/notify/notify.o 00:07:02.287 LIB libspdk_notify.a 00:07:02.287 LIB libspdk_keyring.a 00:07:02.287 SO libspdk_notify.so.6.0 00:07:02.287 SO libspdk_keyring.so.2.0 00:07:02.287 LIB libspdk_trace.a 00:07:02.287 SYMLINK libspdk_notify.so 00:07:02.287 SYMLINK libspdk_keyring.so 00:07:02.287 SO libspdk_trace.so.11.0 00:07:02.287 SYMLINK libspdk_trace.so 00:07:02.287 CC lib/sock/sock.o 00:07:02.287 CC lib/sock/sock_rpc.o 00:07:02.287 CC lib/thread/thread.o 00:07:02.287 CC lib/thread/iobuf.o 00:07:02.287 LIB libspdk_sock.a 00:07:02.287 SO libspdk_sock.so.10.0 00:07:02.287 SYMLINK libspdk_sock.so 00:07:02.287 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:02.287 CC lib/nvme/nvme_fabric.o 00:07:02.287 CC lib/nvme/nvme_ctrlr.o 00:07:02.287 CC lib/nvme/nvme_ns_cmd.o 00:07:02.287 CC lib/nvme/nvme_ns.o 00:07:02.287 CC lib/nvme/nvme_pcie_common.o 00:07:02.287 CC lib/nvme/nvme_pcie.o 00:07:02.287 CC lib/nvme/nvme_qpair.o 00:07:02.287 CC lib/nvme/nvme.o 00:07:02.287 CC lib/nvme/nvme_quirks.o 00:07:02.287 CC lib/nvme/nvme_transport.o 00:07:02.287 CC lib/nvme/nvme_discovery.o 00:07:02.287 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:02.287 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:02.287 CC lib/nvme/nvme_tcp.o 00:07:02.287 LIB libspdk_thread.a 00:07:02.287 SO libspdk_thread.so.11.0 00:07:02.287 CC lib/nvme/nvme_opal.o 00:07:02.287 SYMLINK libspdk_thread.so 00:07:02.287 CC lib/nvme/nvme_io_msg.o 00:07:02.287 CC lib/nvme/nvme_poll_group.o 00:07:02.287 CC lib/nvme/nvme_zns.o 00:07:02.287 CC 
lib/nvme/nvme_stubs.o 00:07:02.287 CC lib/nvme/nvme_auth.o 00:07:02.287 CC lib/accel/accel.o 00:07:02.287 CC lib/blob/blobstore.o 00:07:02.287 CC lib/blob/request.o 00:07:02.287 CC lib/blob/zeroes.o 00:07:02.287 CC lib/blob/blob_bs_dev.o 00:07:02.287 CC lib/accel/accel_rpc.o 00:07:02.287 CC lib/accel/accel_sw.o 00:07:02.287 CC lib/nvme/nvme_cuse.o 00:07:02.287 CC lib/nvme/nvme_rdma.o 00:07:02.287 CC lib/init/json_config.o 00:07:02.287 CC lib/virtio/virtio.o 00:07:02.287 CC lib/fsdev/fsdev.o 00:07:02.287 CC lib/fsdev/fsdev_io.o 00:07:02.545 CC lib/fsdev/fsdev_rpc.o 00:07:02.545 CC lib/virtio/virtio_vhost_user.o 00:07:02.545 CC lib/init/subsystem.o 00:07:02.545 CC lib/virtio/virtio_vfio_user.o 00:07:02.803 CC lib/init/subsystem_rpc.o 00:07:02.803 CC lib/virtio/virtio_pci.o 00:07:02.803 CC lib/init/rpc.o 00:07:03.062 LIB libspdk_init.a 00:07:03.062 SO libspdk_init.so.6.0 00:07:03.062 LIB libspdk_fsdev.a 00:07:03.321 LIB libspdk_virtio.a 00:07:03.321 SO libspdk_fsdev.so.2.0 00:07:03.321 SYMLINK libspdk_init.so 00:07:03.321 SO libspdk_virtio.so.7.0 00:07:03.321 SYMLINK libspdk_fsdev.so 00:07:03.321 LIB libspdk_accel.a 00:07:03.321 SYMLINK libspdk_virtio.so 00:07:03.321 SO libspdk_accel.so.16.0 00:07:03.579 SYMLINK libspdk_accel.so 00:07:03.579 CC lib/event/reactor.o 00:07:03.579 CC lib/event/app_rpc.o 00:07:03.579 CC lib/event/app.o 00:07:03.579 CC lib/event/scheduler_static.o 00:07:03.579 CC lib/event/log_rpc.o 00:07:03.579 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:03.837 CC lib/bdev/bdev.o 00:07:03.837 CC lib/bdev/bdev_rpc.o 00:07:03.837 CC lib/bdev/bdev_zone.o 00:07:03.837 CC lib/bdev/part.o 00:07:04.150 LIB libspdk_nvme.a 00:07:04.150 CC lib/bdev/scsi_nvme.o 00:07:04.150 SO libspdk_nvme.so.15.0 00:07:04.408 LIB libspdk_fuse_dispatcher.a 00:07:04.408 SO libspdk_fuse_dispatcher.so.1.0 00:07:04.408 LIB libspdk_event.a 00:07:04.408 SYMLINK libspdk_fuse_dispatcher.so 00:07:04.408 SYMLINK libspdk_nvme.so 00:07:04.666 SO libspdk_event.so.14.0 00:07:04.666 SYMLINK 
libspdk_event.so 00:07:06.042 LIB libspdk_blob.a 00:07:06.042 SO libspdk_blob.so.12.0 00:07:06.302 SYMLINK libspdk_blob.so 00:07:06.561 CC lib/blobfs/blobfs.o 00:07:06.561 CC lib/blobfs/tree.o 00:07:06.561 CC lib/lvol/lvol.o 00:07:07.496 LIB libspdk_bdev.a 00:07:07.496 LIB libspdk_blobfs.a 00:07:07.496 SO libspdk_bdev.so.17.0 00:07:07.496 SO libspdk_blobfs.so.11.0 00:07:07.752 SYMLINK libspdk_blobfs.so 00:07:07.752 SYMLINK libspdk_bdev.so 00:07:07.752 LIB libspdk_lvol.a 00:07:07.752 SO libspdk_lvol.so.11.0 00:07:08.010 SYMLINK libspdk_lvol.so 00:07:08.010 CC lib/ftl/ftl_core.o 00:07:08.010 CC lib/ftl/ftl_init.o 00:07:08.010 CC lib/ftl/ftl_layout.o 00:07:08.010 CC lib/ftl/ftl_io.o 00:07:08.010 CC lib/ftl/ftl_debug.o 00:07:08.010 CC lib/ftl/ftl_sb.o 00:07:08.010 CC lib/nvmf/ctrlr.o 00:07:08.010 CC lib/scsi/dev.o 00:07:08.010 CC lib/nbd/nbd.o 00:07:08.010 CC lib/ublk/ublk.o 00:07:08.010 CC lib/ublk/ublk_rpc.o 00:07:08.010 CC lib/nvmf/ctrlr_discovery.o 00:07:08.267 CC lib/nvmf/ctrlr_bdev.o 00:07:08.267 CC lib/scsi/lun.o 00:07:08.267 CC lib/ftl/ftl_l2p.o 00:07:08.267 CC lib/scsi/port.o 00:07:08.267 CC lib/scsi/scsi.o 00:07:08.267 CC lib/nbd/nbd_rpc.o 00:07:08.524 CC lib/ftl/ftl_l2p_flat.o 00:07:08.524 CC lib/scsi/scsi_bdev.o 00:07:08.524 CC lib/nvmf/subsystem.o 00:07:08.524 CC lib/nvmf/nvmf.o 00:07:08.524 CC lib/ftl/ftl_nv_cache.o 00:07:08.524 LIB libspdk_nbd.a 00:07:08.780 SO libspdk_nbd.so.7.0 00:07:08.780 CC lib/ftl/ftl_band.o 00:07:08.780 SYMLINK libspdk_nbd.so 00:07:08.780 CC lib/ftl/ftl_band_ops.o 00:07:08.780 LIB libspdk_ublk.a 00:07:08.780 CC lib/nvmf/nvmf_rpc.o 00:07:08.780 SO libspdk_ublk.so.3.0 00:07:08.780 SYMLINK libspdk_ublk.so 00:07:08.780 CC lib/nvmf/transport.o 00:07:09.036 CC lib/nvmf/tcp.o 00:07:09.293 CC lib/scsi/scsi_pr.o 00:07:09.293 CC lib/scsi/scsi_rpc.o 00:07:09.293 CC lib/scsi/task.o 00:07:09.293 CC lib/nvmf/stubs.o 00:07:09.550 CC lib/ftl/ftl_writer.o 00:07:09.551 LIB libspdk_scsi.a 00:07:09.551 CC lib/ftl/ftl_rq.o 00:07:09.807 SO 
libspdk_scsi.so.9.0 00:07:09.807 CC lib/ftl/ftl_reloc.o 00:07:09.807 CC lib/nvmf/mdns_server.o 00:07:09.807 SYMLINK libspdk_scsi.so 00:07:09.807 CC lib/nvmf/rdma.o 00:07:10.065 CC lib/nvmf/auth.o 00:07:10.065 CC lib/ftl/ftl_l2p_cache.o 00:07:10.065 CC lib/iscsi/conn.o 00:07:10.323 CC lib/ftl/ftl_p2l.o 00:07:10.323 CC lib/ftl/ftl_p2l_log.o 00:07:10.323 CC lib/vhost/vhost.o 00:07:10.633 CC lib/vhost/vhost_rpc.o 00:07:10.633 CC lib/vhost/vhost_scsi.o 00:07:10.633 CC lib/iscsi/init_grp.o 00:07:10.633 CC lib/iscsi/iscsi.o 00:07:10.892 CC lib/ftl/mngt/ftl_mngt.o 00:07:11.150 CC lib/iscsi/param.o 00:07:11.150 CC lib/iscsi/portal_grp.o 00:07:11.150 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:11.150 CC lib/vhost/vhost_blk.o 00:07:11.408 CC lib/iscsi/tgt_node.o 00:07:11.408 CC lib/iscsi/iscsi_subsystem.o 00:07:11.408 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:11.408 CC lib/iscsi/iscsi_rpc.o 00:07:11.666 CC lib/vhost/rte_vhost_user.o 00:07:11.666 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:11.666 CC lib/iscsi/task.o 00:07:11.924 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:11.924 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:11.924 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:12.183 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:12.183 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:12.183 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:12.183 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:12.183 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:12.441 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:12.441 CC lib/ftl/utils/ftl_conf.o 00:07:12.441 CC lib/ftl/utils/ftl_md.o 00:07:12.441 CC lib/ftl/utils/ftl_mempool.o 00:07:12.441 CC lib/ftl/utils/ftl_bitmap.o 00:07:12.699 CC lib/ftl/utils/ftl_property.o 00:07:12.699 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:12.699 LIB libspdk_iscsi.a 00:07:12.700 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:12.700 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:12.700 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:12.700 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:12.958 SO libspdk_iscsi.so.8.0 00:07:12.958 LIB libspdk_vhost.a 
00:07:12.958 SO libspdk_vhost.so.8.0 00:07:12.958 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:12.958 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:12.958 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:12.958 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:12.958 SYMLINK libspdk_vhost.so 00:07:12.958 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:12.958 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:13.215 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:13.215 SYMLINK libspdk_iscsi.so 00:07:13.215 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:13.215 CC lib/ftl/base/ftl_base_dev.o 00:07:13.215 CC lib/ftl/base/ftl_base_bdev.o 00:07:13.215 CC lib/ftl/ftl_trace.o 00:07:13.480 LIB libspdk_ftl.a 00:07:13.741 LIB libspdk_nvmf.a 00:07:13.741 SO libspdk_ftl.so.9.0 00:07:14.000 SO libspdk_nvmf.so.20.0 00:07:14.000 SYMLINK libspdk_ftl.so 00:07:14.261 SYMLINK libspdk_nvmf.so 00:07:14.520 CC module/env_dpdk/env_dpdk_rpc.o 00:07:14.778 CC module/accel/iaa/accel_iaa.o 00:07:14.778 CC module/blob/bdev/blob_bdev.o 00:07:14.778 CC module/accel/dsa/accel_dsa.o 00:07:14.778 CC module/keyring/file/keyring.o 00:07:14.778 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:14.778 CC module/accel/error/accel_error.o 00:07:14.778 CC module/sock/posix/posix.o 00:07:14.778 CC module/fsdev/aio/fsdev_aio.o 00:07:14.778 CC module/accel/ioat/accel_ioat.o 00:07:14.778 LIB libspdk_env_dpdk_rpc.a 00:07:14.778 SO libspdk_env_dpdk_rpc.so.6.0 00:07:14.778 SYMLINK libspdk_env_dpdk_rpc.so 00:07:14.778 CC module/accel/ioat/accel_ioat_rpc.o 00:07:14.778 CC module/keyring/file/keyring_rpc.o 00:07:14.778 CC module/accel/iaa/accel_iaa_rpc.o 00:07:14.778 LIB libspdk_scheduler_dynamic.a 00:07:14.778 CC module/accel/error/accel_error_rpc.o 00:07:15.037 SO libspdk_scheduler_dynamic.so.4.0 00:07:15.037 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:15.037 SYMLINK libspdk_scheduler_dynamic.so 00:07:15.037 CC module/accel/dsa/accel_dsa_rpc.o 00:07:15.037 LIB libspdk_accel_iaa.a 00:07:15.037 LIB libspdk_blob_bdev.a 00:07:15.037 LIB libspdk_accel_error.a 00:07:15.037 
LIB libspdk_keyring_file.a 00:07:15.037 SO libspdk_blob_bdev.so.12.0 00:07:15.037 SO libspdk_accel_iaa.so.3.0 00:07:15.037 LIB libspdk_accel_ioat.a 00:07:15.037 SO libspdk_accel_error.so.2.0 00:07:15.037 SO libspdk_keyring_file.so.2.0 00:07:15.037 SO libspdk_accel_ioat.so.6.0 00:07:15.037 SYMLINK libspdk_blob_bdev.so 00:07:15.037 LIB libspdk_accel_dsa.a 00:07:15.037 SYMLINK libspdk_accel_iaa.so 00:07:15.295 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:15.295 SYMLINK libspdk_accel_error.so 00:07:15.295 CC module/fsdev/aio/linux_aio_mgr.o 00:07:15.295 SO libspdk_accel_dsa.so.5.0 00:07:15.295 SYMLINK libspdk_keyring_file.so 00:07:15.295 SYMLINK libspdk_accel_ioat.so 00:07:15.295 SYMLINK libspdk_accel_dsa.so 00:07:15.295 CC module/scheduler/gscheduler/gscheduler.o 00:07:15.295 LIB libspdk_scheduler_dpdk_governor.a 00:07:15.295 CC module/keyring/linux/keyring.o 00:07:15.295 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:15.554 CC module/bdev/error/vbdev_error.o 00:07:15.554 CC module/blobfs/bdev/blobfs_bdev.o 00:07:15.554 CC module/bdev/delay/vbdev_delay.o 00:07:15.554 CC module/bdev/gpt/gpt.o 00:07:15.554 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:15.554 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:15.554 CC module/keyring/linux/keyring_rpc.o 00:07:15.554 LIB libspdk_scheduler_gscheduler.a 00:07:15.554 CC module/bdev/lvol/vbdev_lvol.o 00:07:15.554 SO libspdk_scheduler_gscheduler.so.4.0 00:07:15.554 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:15.554 LIB libspdk_sock_posix.a 00:07:15.554 LIB libspdk_fsdev_aio.a 00:07:15.554 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:15.554 CC module/bdev/gpt/vbdev_gpt.o 00:07:15.554 SO libspdk_sock_posix.so.6.0 00:07:15.813 SO libspdk_fsdev_aio.so.1.0 00:07:15.813 LIB libspdk_keyring_linux.a 00:07:15.813 SYMLINK libspdk_scheduler_gscheduler.so 00:07:15.813 SO libspdk_keyring_linux.so.1.0 00:07:15.813 SYMLINK libspdk_fsdev_aio.so 00:07:15.813 CC module/bdev/error/vbdev_error_rpc.o 00:07:15.813 SYMLINK 
libspdk_keyring_linux.so 00:07:15.813 SYMLINK libspdk_sock_posix.so 00:07:15.813 LIB libspdk_blobfs_bdev.a 00:07:15.813 SO libspdk_blobfs_bdev.so.6.0 00:07:15.813 CC module/bdev/malloc/bdev_malloc.o 00:07:15.813 LIB libspdk_bdev_delay.a 00:07:15.813 SO libspdk_bdev_delay.so.6.0 00:07:15.813 CC module/bdev/null/bdev_null.o 00:07:16.071 SYMLINK libspdk_blobfs_bdev.so 00:07:16.071 CC module/bdev/null/bdev_null_rpc.o 00:07:16.071 LIB libspdk_bdev_error.a 00:07:16.071 CC module/bdev/nvme/bdev_nvme.o 00:07:16.071 CC module/bdev/passthru/vbdev_passthru.o 00:07:16.071 LIB libspdk_bdev_gpt.a 00:07:16.071 SYMLINK libspdk_bdev_delay.so 00:07:16.071 SO libspdk_bdev_error.so.6.0 00:07:16.071 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:16.071 SO libspdk_bdev_gpt.so.6.0 00:07:16.071 SYMLINK libspdk_bdev_error.so 00:07:16.071 SYMLINK libspdk_bdev_gpt.so 00:07:16.071 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:16.329 LIB libspdk_bdev_lvol.a 00:07:16.329 CC module/bdev/raid/bdev_raid.o 00:07:16.329 LIB libspdk_bdev_null.a 00:07:16.329 SO libspdk_bdev_lvol.so.6.0 00:07:16.329 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:16.329 SO libspdk_bdev_null.so.6.0 00:07:16.329 CC module/bdev/split/vbdev_split.o 00:07:16.329 SYMLINK libspdk_bdev_lvol.so 00:07:16.329 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:16.329 LIB libspdk_bdev_malloc.a 00:07:16.329 CC module/bdev/raid/bdev_raid_rpc.o 00:07:16.329 CC module/bdev/aio/bdev_aio.o 00:07:16.329 SYMLINK libspdk_bdev_null.so 00:07:16.329 SO libspdk_bdev_malloc.so.6.0 00:07:16.587 SYMLINK libspdk_bdev_malloc.so 00:07:16.587 CC module/bdev/nvme/nvme_rpc.o 00:07:16.587 LIB libspdk_bdev_passthru.a 00:07:16.587 CC module/bdev/ftl/bdev_ftl.o 00:07:16.587 CC module/bdev/split/vbdev_split_rpc.o 00:07:16.587 SO libspdk_bdev_passthru.so.6.0 00:07:16.587 CC module/bdev/nvme/bdev_mdns_client.o 00:07:16.587 SYMLINK libspdk_bdev_passthru.so 00:07:16.587 CC module/bdev/nvme/vbdev_opal.o 00:07:16.587 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 
00:07:16.846 LIB libspdk_bdev_split.a 00:07:16.846 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:16.846 SO libspdk_bdev_split.so.6.0 00:07:16.846 CC module/bdev/aio/bdev_aio_rpc.o 00:07:16.846 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:16.846 SYMLINK libspdk_bdev_split.so 00:07:16.846 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:16.846 LIB libspdk_bdev_zone_block.a 00:07:16.846 SO libspdk_bdev_zone_block.so.6.0 00:07:16.846 CC module/bdev/raid/bdev_raid_sb.o 00:07:16.846 LIB libspdk_bdev_aio.a 00:07:16.846 CC module/bdev/raid/raid0.o 00:07:17.105 SYMLINK libspdk_bdev_zone_block.so 00:07:17.105 CC module/bdev/raid/raid1.o 00:07:17.105 SO libspdk_bdev_aio.so.6.0 00:07:17.105 CC module/bdev/iscsi/bdev_iscsi.o 00:07:17.105 CC module/bdev/raid/concat.o 00:07:17.105 LIB libspdk_bdev_ftl.a 00:07:17.105 SYMLINK libspdk_bdev_aio.so 00:07:17.105 CC module/bdev/raid/raid5f.o 00:07:17.105 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:17.105 SO libspdk_bdev_ftl.so.6.0 00:07:17.105 SYMLINK libspdk_bdev_ftl.so 00:07:17.105 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:17.380 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:17.380 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:17.664 LIB libspdk_bdev_iscsi.a 00:07:17.664 SO libspdk_bdev_iscsi.so.6.0 00:07:17.664 SYMLINK libspdk_bdev_iscsi.so 00:07:17.664 LIB libspdk_bdev_raid.a 00:07:17.664 LIB libspdk_bdev_virtio.a 00:07:17.664 SO libspdk_bdev_raid.so.6.0 00:07:17.923 SO libspdk_bdev_virtio.so.6.0 00:07:17.923 SYMLINK libspdk_bdev_virtio.so 00:07:17.923 SYMLINK libspdk_bdev_raid.so 00:07:19.304 LIB libspdk_bdev_nvme.a 00:07:19.304 SO libspdk_bdev_nvme.so.7.1 00:07:19.563 SYMLINK libspdk_bdev_nvme.so 00:07:20.132 CC module/event/subsystems/iobuf/iobuf.o 00:07:20.132 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:20.132 CC module/event/subsystems/sock/sock.o 00:07:20.132 CC module/event/subsystems/vmd/vmd.o 00:07:20.132 CC module/event/subsystems/fsdev/fsdev.o 00:07:20.132 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:20.132 CC 
module/event/subsystems/keyring/keyring.o 00:07:20.132 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:20.132 CC module/event/subsystems/scheduler/scheduler.o 00:07:20.132 LIB libspdk_event_keyring.a 00:07:20.132 LIB libspdk_event_fsdev.a 00:07:20.132 LIB libspdk_event_vmd.a 00:07:20.132 LIB libspdk_event_vhost_blk.a 00:07:20.132 LIB libspdk_event_iobuf.a 00:07:20.132 SO libspdk_event_keyring.so.1.0 00:07:20.132 LIB libspdk_event_sock.a 00:07:20.132 SO libspdk_event_fsdev.so.1.0 00:07:20.132 SO libspdk_event_vhost_blk.so.3.0 00:07:20.132 SO libspdk_event_vmd.so.6.0 00:07:20.132 LIB libspdk_event_scheduler.a 00:07:20.132 SO libspdk_event_sock.so.5.0 00:07:20.132 SO libspdk_event_iobuf.so.3.0 00:07:20.392 SO libspdk_event_scheduler.so.4.0 00:07:20.392 SYMLINK libspdk_event_keyring.so 00:07:20.392 SYMLINK libspdk_event_vhost_blk.so 00:07:20.392 SYMLINK libspdk_event_sock.so 00:07:20.392 SYMLINK libspdk_event_fsdev.so 00:07:20.392 SYMLINK libspdk_event_vmd.so 00:07:20.392 SYMLINK libspdk_event_iobuf.so 00:07:20.392 SYMLINK libspdk_event_scheduler.so 00:07:20.652 CC module/event/subsystems/accel/accel.o 00:07:20.912 LIB libspdk_event_accel.a 00:07:20.912 SO libspdk_event_accel.so.6.0 00:07:20.912 SYMLINK libspdk_event_accel.so 00:07:21.481 CC module/event/subsystems/bdev/bdev.o 00:07:21.481 LIB libspdk_event_bdev.a 00:07:21.481 SO libspdk_event_bdev.so.6.0 00:07:21.739 SYMLINK libspdk_event_bdev.so 00:07:21.997 CC module/event/subsystems/scsi/scsi.o 00:07:21.997 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:21.997 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:21.997 CC module/event/subsystems/ublk/ublk.o 00:07:21.997 CC module/event/subsystems/nbd/nbd.o 00:07:22.256 LIB libspdk_event_nbd.a 00:07:22.256 LIB libspdk_event_scsi.a 00:07:22.256 LIB libspdk_event_ublk.a 00:07:22.256 SO libspdk_event_nbd.so.6.0 00:07:22.256 SO libspdk_event_ublk.so.3.0 00:07:22.256 SO libspdk_event_scsi.so.6.0 00:07:22.256 SYMLINK libspdk_event_ublk.so 00:07:22.256 SYMLINK 
libspdk_event_nbd.so 00:07:22.256 SYMLINK libspdk_event_scsi.so 00:07:22.256 LIB libspdk_event_nvmf.a 00:07:22.256 SO libspdk_event_nvmf.so.6.0 00:07:22.256 SYMLINK libspdk_event_nvmf.so 00:07:22.515 CC module/event/subsystems/iscsi/iscsi.o 00:07:22.515 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:22.774 LIB libspdk_event_vhost_scsi.a 00:07:22.774 LIB libspdk_event_iscsi.a 00:07:22.774 SO libspdk_event_vhost_scsi.so.3.0 00:07:22.774 SO libspdk_event_iscsi.so.6.0 00:07:22.774 SYMLINK libspdk_event_vhost_scsi.so 00:07:22.774 SYMLINK libspdk_event_iscsi.so 00:07:23.033 SO libspdk.so.6.0 00:07:23.033 SYMLINK libspdk.so 00:07:23.291 CC test/rpc_client/rpc_client_test.o 00:07:23.291 TEST_HEADER include/spdk/accel.h 00:07:23.291 TEST_HEADER include/spdk/accel_module.h 00:07:23.291 CXX app/trace/trace.o 00:07:23.291 TEST_HEADER include/spdk/assert.h 00:07:23.291 CC app/trace_record/trace_record.o 00:07:23.291 TEST_HEADER include/spdk/barrier.h 00:07:23.291 TEST_HEADER include/spdk/base64.h 00:07:23.291 TEST_HEADER include/spdk/bdev.h 00:07:23.291 TEST_HEADER include/spdk/bdev_module.h 00:07:23.291 TEST_HEADER include/spdk/bdev_zone.h 00:07:23.291 TEST_HEADER include/spdk/bit_array.h 00:07:23.291 TEST_HEADER include/spdk/bit_pool.h 00:07:23.291 TEST_HEADER include/spdk/blob_bdev.h 00:07:23.291 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:23.291 TEST_HEADER include/spdk/blobfs.h 00:07:23.291 TEST_HEADER include/spdk/blob.h 00:07:23.291 TEST_HEADER include/spdk/conf.h 00:07:23.291 TEST_HEADER include/spdk/config.h 00:07:23.291 TEST_HEADER include/spdk/cpuset.h 00:07:23.291 TEST_HEADER include/spdk/crc16.h 00:07:23.291 TEST_HEADER include/spdk/crc32.h 00:07:23.291 TEST_HEADER include/spdk/crc64.h 00:07:23.291 TEST_HEADER include/spdk/dif.h 00:07:23.291 TEST_HEADER include/spdk/dma.h 00:07:23.291 TEST_HEADER include/spdk/endian.h 00:07:23.291 TEST_HEADER include/spdk/env_dpdk.h 00:07:23.291 TEST_HEADER include/spdk/env.h 00:07:23.291 TEST_HEADER include/spdk/event.h 
00:07:23.291 TEST_HEADER include/spdk/fd_group.h 00:07:23.550 TEST_HEADER include/spdk/fd.h 00:07:23.550 TEST_HEADER include/spdk/file.h 00:07:23.550 TEST_HEADER include/spdk/fsdev.h 00:07:23.550 TEST_HEADER include/spdk/fsdev_module.h 00:07:23.550 TEST_HEADER include/spdk/ftl.h 00:07:23.550 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:23.550 TEST_HEADER include/spdk/gpt_spec.h 00:07:23.550 TEST_HEADER include/spdk/hexlify.h 00:07:23.550 TEST_HEADER include/spdk/histogram_data.h 00:07:23.550 TEST_HEADER include/spdk/idxd.h 00:07:23.550 CC examples/util/zipf/zipf.o 00:07:23.550 CC examples/ioat/perf/perf.o 00:07:23.550 TEST_HEADER include/spdk/idxd_spec.h 00:07:23.550 TEST_HEADER include/spdk/init.h 00:07:23.550 TEST_HEADER include/spdk/ioat.h 00:07:23.550 TEST_HEADER include/spdk/ioat_spec.h 00:07:23.550 TEST_HEADER include/spdk/iscsi_spec.h 00:07:23.550 TEST_HEADER include/spdk/json.h 00:07:23.550 TEST_HEADER include/spdk/jsonrpc.h 00:07:23.550 TEST_HEADER include/spdk/keyring.h 00:07:23.550 TEST_HEADER include/spdk/keyring_module.h 00:07:23.550 TEST_HEADER include/spdk/likely.h 00:07:23.550 CC test/thread/poller_perf/poller_perf.o 00:07:23.550 TEST_HEADER include/spdk/log.h 00:07:23.550 TEST_HEADER include/spdk/lvol.h 00:07:23.550 TEST_HEADER include/spdk/md5.h 00:07:23.550 TEST_HEADER include/spdk/memory.h 00:07:23.550 TEST_HEADER include/spdk/mmio.h 00:07:23.550 TEST_HEADER include/spdk/nbd.h 00:07:23.550 CC test/app/bdev_svc/bdev_svc.o 00:07:23.550 TEST_HEADER include/spdk/net.h 00:07:23.550 TEST_HEADER include/spdk/notify.h 00:07:23.550 TEST_HEADER include/spdk/nvme.h 00:07:23.550 TEST_HEADER include/spdk/nvme_intel.h 00:07:23.550 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:23.550 CC test/dma/test_dma/test_dma.o 00:07:23.550 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:23.550 TEST_HEADER include/spdk/nvme_spec.h 00:07:23.550 TEST_HEADER include/spdk/nvme_zns.h 00:07:23.550 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:23.550 TEST_HEADER 
include/spdk/nvmf_fc_spec.h 00:07:23.550 TEST_HEADER include/spdk/nvmf.h 00:07:23.550 TEST_HEADER include/spdk/nvmf_spec.h 00:07:23.550 TEST_HEADER include/spdk/nvmf_transport.h 00:07:23.550 TEST_HEADER include/spdk/opal.h 00:07:23.550 TEST_HEADER include/spdk/opal_spec.h 00:07:23.550 TEST_HEADER include/spdk/pci_ids.h 00:07:23.550 TEST_HEADER include/spdk/pipe.h 00:07:23.550 TEST_HEADER include/spdk/queue.h 00:07:23.550 TEST_HEADER include/spdk/reduce.h 00:07:23.550 TEST_HEADER include/spdk/rpc.h 00:07:23.550 TEST_HEADER include/spdk/scheduler.h 00:07:23.550 TEST_HEADER include/spdk/scsi.h 00:07:23.550 TEST_HEADER include/spdk/scsi_spec.h 00:07:23.550 TEST_HEADER include/spdk/sock.h 00:07:23.550 TEST_HEADER include/spdk/stdinc.h 00:07:23.550 TEST_HEADER include/spdk/string.h 00:07:23.550 TEST_HEADER include/spdk/thread.h 00:07:23.550 TEST_HEADER include/spdk/trace.h 00:07:23.550 CC test/env/mem_callbacks/mem_callbacks.o 00:07:23.550 TEST_HEADER include/spdk/trace_parser.h 00:07:23.550 TEST_HEADER include/spdk/tree.h 00:07:23.550 TEST_HEADER include/spdk/ublk.h 00:07:23.550 TEST_HEADER include/spdk/util.h 00:07:23.550 TEST_HEADER include/spdk/uuid.h 00:07:23.550 TEST_HEADER include/spdk/version.h 00:07:23.550 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:23.550 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:23.550 TEST_HEADER include/spdk/vhost.h 00:07:23.550 TEST_HEADER include/spdk/vmd.h 00:07:23.550 TEST_HEADER include/spdk/xor.h 00:07:23.550 TEST_HEADER include/spdk/zipf.h 00:07:23.550 CXX test/cpp_headers/accel.o 00:07:23.550 LINK rpc_client_test 00:07:23.550 LINK poller_perf 00:07:23.550 LINK zipf 00:07:23.550 LINK bdev_svc 00:07:23.550 LINK spdk_trace_record 00:07:23.808 LINK ioat_perf 00:07:23.808 CXX test/cpp_headers/accel_module.o 00:07:23.808 LINK spdk_trace 00:07:23.808 CC test/env/vtophys/vtophys.o 00:07:23.808 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:23.808 CXX test/cpp_headers/assert.o 00:07:23.808 CC app/nvmf_tgt/nvmf_main.o 
00:07:23.808 CC examples/ioat/verify/verify.o 00:07:24.066 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:24.066 LINK vtophys 00:07:24.066 CXX test/cpp_headers/barrier.o 00:07:24.066 LINK env_dpdk_post_init 00:07:24.066 LINK test_dma 00:07:24.066 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:24.066 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:24.066 LINK mem_callbacks 00:07:24.066 LINK nvmf_tgt 00:07:24.066 LINK interrupt_tgt 00:07:24.066 LINK verify 00:07:24.324 CXX test/cpp_headers/base64.o 00:07:24.324 CC app/iscsi_tgt/iscsi_tgt.o 00:07:24.324 CC test/env/memory/memory_ut.o 00:07:24.324 CC app/spdk_tgt/spdk_tgt.o 00:07:24.324 CC app/spdk_lspci/spdk_lspci.o 00:07:24.324 CXX test/cpp_headers/bdev.o 00:07:24.324 CC app/spdk_nvme_perf/perf.o 00:07:24.324 CC app/spdk_nvme_identify/identify.o 00:07:24.583 LINK iscsi_tgt 00:07:24.583 LINK spdk_lspci 00:07:24.583 CC examples/thread/thread/thread_ex.o 00:07:24.583 LINK nvme_fuzz 00:07:24.583 CXX test/cpp_headers/bdev_module.o 00:07:24.583 LINK spdk_tgt 00:07:24.583 CXX test/cpp_headers/bdev_zone.o 00:07:24.841 CXX test/cpp_headers/bit_array.o 00:07:24.841 LINK thread 00:07:24.841 CC examples/sock/hello_world/hello_sock.o 00:07:24.841 CXX test/cpp_headers/bit_pool.o 00:07:24.841 CC examples/vmd/lsvmd/lsvmd.o 00:07:24.841 CC examples/idxd/perf/perf.o 00:07:25.150 LINK lsvmd 00:07:25.150 CXX test/cpp_headers/blob_bdev.o 00:07:25.150 CC app/spdk_nvme_discover/discovery_aer.o 00:07:25.150 CC test/env/pci/pci_ut.o 00:07:25.150 LINK hello_sock 00:07:25.439 CXX test/cpp_headers/blobfs_bdev.o 00:07:25.439 CC examples/vmd/led/led.o 00:07:25.439 LINK idxd_perf 00:07:25.439 LINK spdk_nvme_discover 00:07:25.439 LINK spdk_nvme_perf 00:07:25.439 LINK spdk_nvme_identify 00:07:25.439 CC app/spdk_top/spdk_top.o 00:07:25.439 CXX test/cpp_headers/blobfs.o 00:07:25.439 LINK led 00:07:25.697 LINK memory_ut 00:07:25.697 LINK pci_ut 00:07:25.697 CXX test/cpp_headers/blob.o 00:07:25.697 CC app/vhost/vhost.o 00:07:25.697 CC 
examples/accel/perf/accel_perf.o 00:07:25.697 CC app/spdk_dd/spdk_dd.o 00:07:25.697 CXX test/cpp_headers/conf.o 00:07:25.697 CC examples/blob/hello_world/hello_blob.o 00:07:25.955 CC examples/nvme/hello_world/hello_world.o 00:07:25.955 LINK vhost 00:07:25.955 CXX test/cpp_headers/config.o 00:07:25.955 CC test/app/histogram_perf/histogram_perf.o 00:07:25.955 CXX test/cpp_headers/cpuset.o 00:07:25.955 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:25.955 LINK hello_blob 00:07:26.214 LINK hello_world 00:07:26.214 LINK histogram_perf 00:07:26.214 CXX test/cpp_headers/crc16.o 00:07:26.214 LINK spdk_dd 00:07:26.214 LINK iscsi_fuzz 00:07:26.214 CC app/fio/nvme/fio_plugin.o 00:07:26.214 LINK hello_fsdev 00:07:26.214 CXX test/cpp_headers/crc32.o 00:07:26.214 LINK accel_perf 00:07:26.472 CC examples/nvme/reconnect/reconnect.o 00:07:26.472 CC examples/blob/cli/blobcli.o 00:07:26.472 CC app/fio/bdev/fio_plugin.o 00:07:26.472 CXX test/cpp_headers/crc64.o 00:07:26.472 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:26.472 LINK spdk_top 00:07:26.472 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:26.731 CC test/event/event_perf/event_perf.o 00:07:26.731 CXX test/cpp_headers/dif.o 00:07:26.731 CC test/nvme/aer/aer.o 00:07:26.731 CC test/nvme/reset/reset.o 00:07:26.731 CC test/app/jsoncat/jsoncat.o 00:07:26.731 LINK event_perf 00:07:26.731 LINK reconnect 00:07:26.731 CXX test/cpp_headers/dma.o 00:07:26.990 LINK spdk_nvme 00:07:26.990 LINK jsoncat 00:07:26.990 LINK spdk_bdev 00:07:26.990 CXX test/cpp_headers/endian.o 00:07:26.990 LINK vhost_fuzz 00:07:26.990 CC test/event/reactor/reactor.o 00:07:26.990 LINK aer 00:07:26.990 LINK reset 00:07:26.990 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:26.990 CC test/app/stub/stub.o 00:07:26.990 LINK blobcli 00:07:27.250 CXX test/cpp_headers/env_dpdk.o 00:07:27.250 LINK reactor 00:07:27.250 CC test/accel/dif/dif.o 00:07:27.250 LINK stub 00:07:27.250 CC examples/bdev/hello_world/hello_bdev.o 00:07:27.250 CC test/nvme/sgl/sgl.o 
00:07:27.250 CC examples/bdev/bdevperf/bdevperf.o 00:07:27.250 CXX test/cpp_headers/env.o 00:07:27.250 CC test/blobfs/mkfs/mkfs.o 00:07:27.250 CC test/nvme/e2edp/nvme_dp.o 00:07:27.510 CC test/event/reactor_perf/reactor_perf.o 00:07:27.510 CXX test/cpp_headers/event.o 00:07:27.510 LINK hello_bdev 00:07:27.510 CC test/nvme/overhead/overhead.o 00:07:27.510 LINK reactor_perf 00:07:27.510 LINK mkfs 00:07:27.510 LINK sgl 00:07:27.510 CXX test/cpp_headers/fd_group.o 00:07:27.768 LINK nvme_manage 00:07:27.768 LINK nvme_dp 00:07:27.768 CXX test/cpp_headers/fd.o 00:07:27.768 CXX test/cpp_headers/file.o 00:07:27.768 CC test/event/app_repeat/app_repeat.o 00:07:27.768 LINK overhead 00:07:28.027 CC test/nvme/err_injection/err_injection.o 00:07:28.027 CXX test/cpp_headers/fsdev.o 00:07:28.027 CC examples/nvme/arbitration/arbitration.o 00:07:28.027 CXX test/cpp_headers/fsdev_module.o 00:07:28.027 CC test/nvme/startup/startup.o 00:07:28.027 LINK app_repeat 00:07:28.027 CC test/lvol/esnap/esnap.o 00:07:28.027 LINK dif 00:07:28.027 LINK err_injection 00:07:28.027 CXX test/cpp_headers/ftl.o 00:07:28.286 CC test/nvme/reserve/reserve.o 00:07:28.286 LINK startup 00:07:28.286 CC test/event/scheduler/scheduler.o 00:07:28.286 CC examples/nvme/hotplug/hotplug.o 00:07:28.286 LINK bdevperf 00:07:28.286 LINK arbitration 00:07:28.286 CXX test/cpp_headers/fuse_dispatcher.o 00:07:28.286 CC test/nvme/simple_copy/simple_copy.o 00:07:28.286 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:28.286 LINK reserve 00:07:28.545 CC test/nvme/connect_stress/connect_stress.o 00:07:28.545 LINK scheduler 00:07:28.545 CXX test/cpp_headers/gpt_spec.o 00:07:28.545 LINK hotplug 00:07:28.545 CC examples/nvme/abort/abort.o 00:07:28.545 LINK cmb_copy 00:07:28.545 CC test/nvme/boot_partition/boot_partition.o 00:07:28.545 LINK connect_stress 00:07:28.545 LINK simple_copy 00:07:28.545 CXX test/cpp_headers/hexlify.o 00:07:28.804 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:28.804 CC 
test/nvme/compliance/nvme_compliance.o 00:07:28.804 LINK boot_partition 00:07:28.804 CC test/bdev/bdevio/bdevio.o 00:07:28.804 CXX test/cpp_headers/histogram_data.o 00:07:28.804 CXX test/cpp_headers/idxd.o 00:07:28.804 CC test/nvme/fused_ordering/fused_ordering.o 00:07:28.804 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:28.804 LINK pmr_persistence 00:07:29.063 CXX test/cpp_headers/idxd_spec.o 00:07:29.063 CXX test/cpp_headers/init.o 00:07:29.063 LINK abort 00:07:29.063 CC test/nvme/fdp/fdp.o 00:07:29.063 LINK fused_ordering 00:07:29.063 LINK doorbell_aers 00:07:29.063 LINK nvme_compliance 00:07:29.063 CXX test/cpp_headers/ioat.o 00:07:29.063 CXX test/cpp_headers/ioat_spec.o 00:07:29.063 CC test/nvme/cuse/cuse.o 00:07:29.323 LINK bdevio 00:07:29.323 CXX test/cpp_headers/iscsi_spec.o 00:07:29.323 CXX test/cpp_headers/json.o 00:07:29.323 CXX test/cpp_headers/jsonrpc.o 00:07:29.323 CXX test/cpp_headers/keyring.o 00:07:29.323 CXX test/cpp_headers/keyring_module.o 00:07:29.323 CC examples/nvmf/nvmf/nvmf.o 00:07:29.323 CXX test/cpp_headers/likely.o 00:07:29.323 CXX test/cpp_headers/log.o 00:07:29.323 CXX test/cpp_headers/lvol.o 00:07:29.323 LINK fdp 00:07:29.583 CXX test/cpp_headers/md5.o 00:07:29.583 CXX test/cpp_headers/memory.o 00:07:29.583 CXX test/cpp_headers/mmio.o 00:07:29.583 CXX test/cpp_headers/nbd.o 00:07:29.583 CXX test/cpp_headers/net.o 00:07:29.583 CXX test/cpp_headers/notify.o 00:07:29.583 CXX test/cpp_headers/nvme.o 00:07:29.583 CXX test/cpp_headers/nvme_intel.o 00:07:29.583 CXX test/cpp_headers/nvme_ocssd.o 00:07:29.583 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:29.583 CXX test/cpp_headers/nvme_spec.o 00:07:29.842 LINK nvmf 00:07:29.842 CXX test/cpp_headers/nvme_zns.o 00:07:29.842 CXX test/cpp_headers/nvmf_cmd.o 00:07:29.842 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:29.842 CXX test/cpp_headers/nvmf.o 00:07:29.842 CXX test/cpp_headers/nvmf_spec.o 00:07:29.842 CXX test/cpp_headers/nvmf_transport.o 00:07:29.842 CXX test/cpp_headers/opal.o 00:07:29.842 
CXX test/cpp_headers/opal_spec.o 00:07:29.842 CXX test/cpp_headers/pci_ids.o 00:07:29.842 CXX test/cpp_headers/pipe.o 00:07:30.101 CXX test/cpp_headers/queue.o 00:07:30.101 CXX test/cpp_headers/reduce.o 00:07:30.101 CXX test/cpp_headers/rpc.o 00:07:30.101 CXX test/cpp_headers/scheduler.o 00:07:30.101 CXX test/cpp_headers/scsi.o 00:07:30.101 CXX test/cpp_headers/scsi_spec.o 00:07:30.101 CXX test/cpp_headers/sock.o 00:07:30.101 CXX test/cpp_headers/stdinc.o 00:07:30.101 CXX test/cpp_headers/string.o 00:07:30.101 CXX test/cpp_headers/thread.o 00:07:30.101 CXX test/cpp_headers/trace.o 00:07:30.101 CXX test/cpp_headers/trace_parser.o 00:07:30.101 CXX test/cpp_headers/tree.o 00:07:30.101 CXX test/cpp_headers/ublk.o 00:07:30.360 CXX test/cpp_headers/util.o 00:07:30.360 CXX test/cpp_headers/uuid.o 00:07:30.360 CXX test/cpp_headers/version.o 00:07:30.360 CXX test/cpp_headers/vfio_user_pci.o 00:07:30.360 CXX test/cpp_headers/vfio_user_spec.o 00:07:30.360 CXX test/cpp_headers/vhost.o 00:07:30.360 CXX test/cpp_headers/vmd.o 00:07:30.360 CXX test/cpp_headers/xor.o 00:07:30.360 CXX test/cpp_headers/zipf.o 00:07:30.619 LINK cuse 00:07:33.918 LINK esnap 00:07:34.548 00:07:34.548 real 1m40.725s 00:07:34.548 user 8m21.533s 00:07:34.548 sys 1m21.629s 00:07:34.548 16:23:16 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:34.548 ************************************ 00:07:34.548 END TEST make 00:07:34.548 ************************************ 00:07:34.548 16:23:16 make -- common/autotest_common.sh@10 -- $ set +x 00:07:34.548 16:23:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:34.548 16:23:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:34.548 16:23:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:34.548 16:23:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:34.548 16:23:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:34.548 16:23:16 -- pm/common@44 -- $ 
pid=6203 00:07:34.548 16:23:16 -- pm/common@50 -- $ kill -TERM 6203 00:07:34.548 16:23:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:34.548 16:23:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:34.548 16:23:16 -- pm/common@44 -- $ pid=6205 00:07:34.548 16:23:16 -- pm/common@50 -- $ kill -TERM 6205 00:07:34.548 16:23:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:34.548 16:23:16 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:34.548 16:23:16 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:34.548 16:23:16 -- common/autotest_common.sh@1711 -- # lcov --version 00:07:34.548 16:23:16 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:34.548 16:23:16 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:34.548 16:23:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.548 16:23:16 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.548 16:23:16 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.548 16:23:16 -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.548 16:23:16 -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.548 16:23:16 -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.548 16:23:16 -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.548 16:23:16 -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.548 16:23:16 -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.548 16:23:16 -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.548 16:23:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.548 16:23:16 -- scripts/common.sh@344 -- # case "$op" in 00:07:34.548 16:23:16 -- scripts/common.sh@345 -- # : 1 00:07:34.548 16:23:16 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.548 16:23:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.548 16:23:16 -- scripts/common.sh@365 -- # decimal 1 00:07:34.548 16:23:16 -- scripts/common.sh@353 -- # local d=1 00:07:34.548 16:23:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.548 16:23:16 -- scripts/common.sh@355 -- # echo 1 00:07:34.548 16:23:16 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.548 16:23:16 -- scripts/common.sh@366 -- # decimal 2 00:07:34.548 16:23:16 -- scripts/common.sh@353 -- # local d=2 00:07:34.548 16:23:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.548 16:23:16 -- scripts/common.sh@355 -- # echo 2 00:07:34.548 16:23:16 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.548 16:23:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.548 16:23:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.548 16:23:16 -- scripts/common.sh@368 -- # return 0 00:07:34.548 16:23:16 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.548 16:23:16 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:34.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.548 --rc genhtml_branch_coverage=1 00:07:34.548 --rc genhtml_function_coverage=1 00:07:34.548 --rc genhtml_legend=1 00:07:34.548 --rc geninfo_all_blocks=1 00:07:34.548 --rc geninfo_unexecuted_blocks=1 00:07:34.548 00:07:34.548 ' 00:07:34.548 16:23:16 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:34.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.548 --rc genhtml_branch_coverage=1 00:07:34.548 --rc genhtml_function_coverage=1 00:07:34.548 --rc genhtml_legend=1 00:07:34.548 --rc geninfo_all_blocks=1 00:07:34.548 --rc geninfo_unexecuted_blocks=1 00:07:34.548 00:07:34.548 ' 00:07:34.548 16:23:16 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:34.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.548 --rc genhtml_branch_coverage=1 00:07:34.548 --rc 
genhtml_function_coverage=1 00:07:34.548 --rc genhtml_legend=1 00:07:34.548 --rc geninfo_all_blocks=1 00:07:34.548 --rc geninfo_unexecuted_blocks=1 00:07:34.548 00:07:34.548 ' 00:07:34.548 16:23:16 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:34.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.548 --rc genhtml_branch_coverage=1 00:07:34.548 --rc genhtml_function_coverage=1 00:07:34.548 --rc genhtml_legend=1 00:07:34.548 --rc geninfo_all_blocks=1 00:07:34.548 --rc geninfo_unexecuted_blocks=1 00:07:34.548 00:07:34.548 ' 00:07:34.548 16:23:16 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:34.548 16:23:16 -- nvmf/common.sh@7 -- # uname -s 00:07:34.808 16:23:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.808 16:23:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.808 16:23:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.808 16:23:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.808 16:23:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.808 16:23:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.808 16:23:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.808 16:23:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.808 16:23:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.808 16:23:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.808 16:23:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4b60f70-3bfd-4379-bb78-1dcb5629a12f 00:07:34.808 16:23:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=b4b60f70-3bfd-4379-bb78-1dcb5629a12f 00:07:34.808 16:23:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.808 16:23:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.808 16:23:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:34.808 16:23:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:34.808 16:23:16 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:34.808 16:23:16 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.808 16:23:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.808 16:23:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.808 16:23:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.808 16:23:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.808 16:23:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.808 16:23:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.808 16:23:16 -- paths/export.sh@5 -- # export PATH 00:07:34.808 16:23:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.808 16:23:16 -- nvmf/common.sh@51 -- # : 0 00:07:34.808 16:23:16 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:34.808 16:23:16 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:34.808 16:23:16 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:07:34.808 16:23:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.808 16:23:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.808 16:23:16 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:34.808 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:34.808 16:23:16 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:34.808 16:23:16 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:34.808 16:23:16 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:34.808 16:23:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:34.808 16:23:16 -- spdk/autotest.sh@32 -- # uname -s 00:07:34.808 16:23:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:34.808 16:23:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:34.808 16:23:16 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:34.808 16:23:16 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:34.808 16:23:16 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:34.808 16:23:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:34.808 16:23:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:34.808 16:23:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:34.808 16:23:16 -- spdk/autotest.sh@48 -- # udevadm_pid=67082 00:07:34.808 16:23:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:34.808 16:23:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:34.808 16:23:16 -- pm/common@17 -- # local monitor 00:07:34.808 16:23:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:34.808 16:23:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:34.808 16:23:16 -- pm/common@21 -- # date +%s 00:07:34.808 16:23:16 -- pm/common@25 -- # sleep 1 00:07:34.808 16:23:16 -- 
pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733502196 00:07:34.808 16:23:16 -- pm/common@21 -- # date +%s 00:07:34.808 16:23:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733502196 00:07:34.808 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733502196_collect-vmstat.pm.log 00:07:34.808 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733502196_collect-cpu-load.pm.log 00:07:35.746 16:23:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:35.746 16:23:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:35.746 16:23:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.746 16:23:17 -- common/autotest_common.sh@10 -- # set +x 00:07:35.746 16:23:17 -- spdk/autotest.sh@59 -- # create_test_list 00:07:35.746 16:23:17 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:35.746 16:23:17 -- common/autotest_common.sh@10 -- # set +x 00:07:35.746 16:23:17 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:35.746 16:23:17 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:35.746 16:23:17 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:35.746 16:23:17 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:35.746 16:23:17 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:35.746 16:23:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:35.746 16:23:17 -- common/autotest_common.sh@1457 -- # uname 00:07:36.006 16:23:17 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:36.006 16:23:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:36.006 16:23:17 -- common/autotest_common.sh@1477 -- 
# uname 00:07:36.006 16:23:17 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:36.006 16:23:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:36.006 16:23:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:36.006 lcov: LCOV version 1.15 00:07:36.006 16:23:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:50.954 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:50.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:09.106 16:23:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:09.106 16:23:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.106 16:23:48 -- common/autotest_common.sh@10 -- # set +x 00:08:09.106 16:23:48 -- spdk/autotest.sh@78 -- # rm -f 00:08:09.106 16:23:48 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:09.106 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:09.106 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:09.106 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:09.106 16:23:49 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:09.106 16:23:49 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:09.106 16:23:49 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:09.106 16:23:49 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:08:09.106 
16:23:49 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:08:09.106 16:23:49 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:08:09.106 16:23:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:09.106 16:23:49 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:08:09.106 16:23:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:09.106 16:23:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:09.106 16:23:49 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:09.106 16:23:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:09.106 16:23:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:09.106 16:23:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:09.106 16:23:49 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:08:09.106 16:23:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:09.106 16:23:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:08:09.106 16:23:49 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:09.106 16:23:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:09.106 16:23:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:09.106 16:23:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:09.106 16:23:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:08:09.106 16:23:49 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:08:09.106 16:23:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:09.106 16:23:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:09.106 16:23:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:09.106 16:23:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:08:09.106 16:23:49 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:08:09.106 16:23:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:09.106 16:23:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:09.106 16:23:49 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:09.106 16:23:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:09.106 16:23:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:09.106 16:23:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:09.106 16:23:49 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:09.106 16:23:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:09.106 No valid GPT data, bailing 00:08:09.106 16:23:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:09.106 16:23:49 -- scripts/common.sh@394 -- # pt= 00:08:09.106 16:23:49 -- scripts/common.sh@395 -- # return 1 00:08:09.106 16:23:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:09.106 1+0 records in 00:08:09.106 1+0 records out 00:08:09.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647337 s, 162 MB/s 00:08:09.106 16:23:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:09.106 16:23:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:09.106 16:23:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:09.106 16:23:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:09.106 16:23:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:09.106 No valid GPT data, bailing 00:08:09.106 16:23:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:09.106 16:23:49 -- scripts/common.sh@394 -- # pt= 00:08:09.106 16:23:49 -- scripts/common.sh@395 -- # return 1 00:08:09.106 16:23:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:09.106 1+0 records in 00:08:09.106 1+0 records 
out 00:08:09.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445598 s, 235 MB/s 00:08:09.106 16:23:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:09.106 16:23:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:09.106 16:23:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:08:09.106 16:23:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:08:09.106 16:23:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:09.106 No valid GPT data, bailing 00:08:09.106 16:23:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:09.106 16:23:49 -- scripts/common.sh@394 -- # pt= 00:08:09.106 16:23:49 -- scripts/common.sh@395 -- # return 1 00:08:09.106 16:23:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:09.107 1+0 records in 00:08:09.107 1+0 records out 00:08:09.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467911 s, 224 MB/s 00:08:09.107 16:23:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:09.107 16:23:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:09.107 16:23:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:08:09.107 16:23:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:08:09.107 16:23:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:09.107 No valid GPT data, bailing 00:08:09.107 16:23:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:09.107 16:23:49 -- scripts/common.sh@394 -- # pt= 00:08:09.107 16:23:49 -- scripts/common.sh@395 -- # return 1 00:08:09.107 16:23:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:09.107 1+0 records in 00:08:09.107 1+0 records out 00:08:09.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00638277 s, 164 MB/s 00:08:09.107 16:23:49 -- spdk/autotest.sh@105 -- # sync 00:08:09.107 16:23:49 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:08:09.107 16:23:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:09.107 16:23:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:11.014 16:23:52 -- spdk/autotest.sh@111 -- # uname -s 00:08:11.014 16:23:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:11.014 16:23:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:11.014 16:23:52 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:11.582 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:11.582 Hugepages 00:08:11.582 node hugesize free / total 00:08:11.582 node0 1048576kB 0 / 0 00:08:11.582 node0 2048kB 0 / 0 00:08:11.582 00:08:11.582 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:11.582 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:11.840 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:11.840 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:11.840 16:23:53 -- spdk/autotest.sh@117 -- # uname -s 00:08:11.840 16:23:53 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:11.840 16:23:53 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:11.840 16:23:53 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:12.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:12.776 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.776 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:12.776 16:23:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:14.151 16:23:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:14.151 16:23:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:14.151 16:23:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:14.151 16:23:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:08:14.151 16:23:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:14.151 16:23:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:14.151 16:23:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:14.151 16:23:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:14.151 16:23:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:14.151 16:23:55 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:14.151 16:23:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:14.151 16:23:55 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:14.407 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:14.407 Waiting for block devices as requested 00:08:14.407 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.664 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.664 16:23:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:14.664 16:23:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:14.664 16:23:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:14.664 16:23:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:14.664 16:23:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:14.664 16:23:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:14.664 16:23:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:14.664 16:23:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:14.664 16:23:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:14.664 
16:23:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:14.664 16:23:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:14.664 16:23:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:14.664 16:23:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:14.664 16:23:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:14.664 16:23:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:14.664 16:23:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:14.664 16:23:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:14.664 16:23:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:14.664 16:23:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:14.664 16:23:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:14.664 16:23:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:14.664 16:23:56 -- common/autotest_common.sh@1543 -- # continue 00:08:14.664 16:23:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:14.664 16:23:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:14.664 16:23:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:14.664 16:23:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:14.664 16:23:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:14.664 16:23:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:14.664 16:23:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:14.664 16:23:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:14.664 16:23:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:14.664 16:23:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:14.664 16:23:56 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:14.664 16:23:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:14.664 16:23:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:14.664 16:23:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:14.664 16:23:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:14.664 16:23:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:14.664 16:23:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:14.664 16:23:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:14.664 16:23:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:14.664 16:23:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:14.664 16:23:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:14.664 16:23:56 -- common/autotest_common.sh@1543 -- # continue 00:08:14.664 16:23:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:14.664 16:23:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:14.664 16:23:56 -- common/autotest_common.sh@10 -- # set +x 00:08:14.664 16:23:56 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:14.664 16:23:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.664 16:23:56 -- common/autotest_common.sh@10 -- # set +x 00:08:14.921 16:23:56 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:15.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:15.745 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:15.745 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:15.745 16:23:57 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:15.745 16:23:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.745 16:23:57 -- common/autotest_common.sh@10 -- # set +x 00:08:16.003 16:23:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:16.003 16:23:57 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:16.003 16:23:57 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:16.003 16:23:57 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:16.003 16:23:57 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:16.004 16:23:57 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:16.004 16:23:57 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:16.004 16:23:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:16.004 16:23:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:16.004 16:23:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:16.004 16:23:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:16.004 16:23:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:16.004 16:23:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:16.004 16:23:57 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:16.004 16:23:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:16.004 16:23:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:16.004 16:23:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:16.004 16:23:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:16.004 16:23:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:16.004 16:23:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:16.004 16:23:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:16.004 16:23:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:16.004 16:23:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:16.004 16:23:57 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:16.004 16:23:57 -- 
common/autotest_common.sh@1572 -- # return 0 00:08:16.004 16:23:57 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:16.004 16:23:57 -- common/autotest_common.sh@1580 -- # return 0 00:08:16.004 16:23:57 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:16.004 16:23:57 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:16.004 16:23:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:16.004 16:23:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:16.004 16:23:57 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:16.004 16:23:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.004 16:23:57 -- common/autotest_common.sh@10 -- # set +x 00:08:16.004 16:23:57 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:16.004 16:23:57 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:16.004 16:23:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.004 16:23:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.004 16:23:57 -- common/autotest_common.sh@10 -- # set +x 00:08:16.004 ************************************ 00:08:16.004 START TEST env 00:08:16.004 ************************************ 00:08:16.004 16:23:57 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:16.004 * Looking for test storage... 
00:08:16.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:16.263 16:23:57 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:16.263 16:23:57 env -- common/autotest_common.sh@1711 -- # lcov --version 00:08:16.263 16:23:57 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:16.263 16:23:57 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:16.263 16:23:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.263 16:23:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.263 16:23:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.263 16:23:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.263 16:23:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.263 16:23:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.263 16:23:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.263 16:23:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.263 16:23:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.263 16:23:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.263 16:23:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.263 16:23:57 env -- scripts/common.sh@344 -- # case "$op" in 00:08:16.263 16:23:57 env -- scripts/common.sh@345 -- # : 1 00:08:16.263 16:23:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.263 16:23:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.263 16:23:57 env -- scripts/common.sh@365 -- # decimal 1 00:08:16.263 16:23:57 env -- scripts/common.sh@353 -- # local d=1 00:08:16.263 16:23:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.263 16:23:57 env -- scripts/common.sh@355 -- # echo 1 00:08:16.263 16:23:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.263 16:23:57 env -- scripts/common.sh@366 -- # decimal 2 00:08:16.263 16:23:57 env -- scripts/common.sh@353 -- # local d=2 00:08:16.263 16:23:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.263 16:23:57 env -- scripts/common.sh@355 -- # echo 2 00:08:16.263 16:23:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.263 16:23:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.263 16:23:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.263 16:23:57 env -- scripts/common.sh@368 -- # return 0 00:08:16.263 16:23:57 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.263 16:23:57 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:16.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.263 --rc genhtml_branch_coverage=1 00:08:16.263 --rc genhtml_function_coverage=1 00:08:16.263 --rc genhtml_legend=1 00:08:16.263 --rc geninfo_all_blocks=1 00:08:16.263 --rc geninfo_unexecuted_blocks=1 00:08:16.263 00:08:16.263 ' 00:08:16.263 16:23:57 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:16.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.263 --rc genhtml_branch_coverage=1 00:08:16.263 --rc genhtml_function_coverage=1 00:08:16.263 --rc genhtml_legend=1 00:08:16.263 --rc geninfo_all_blocks=1 00:08:16.263 --rc geninfo_unexecuted_blocks=1 00:08:16.263 00:08:16.263 ' 00:08:16.263 16:23:57 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:16.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:16.263 --rc genhtml_branch_coverage=1 00:08:16.263 --rc genhtml_function_coverage=1 00:08:16.263 --rc genhtml_legend=1 00:08:16.263 --rc geninfo_all_blocks=1 00:08:16.263 --rc geninfo_unexecuted_blocks=1 00:08:16.263 00:08:16.263 ' 00:08:16.263 16:23:57 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:16.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.263 --rc genhtml_branch_coverage=1 00:08:16.263 --rc genhtml_function_coverage=1 00:08:16.263 --rc genhtml_legend=1 00:08:16.263 --rc geninfo_all_blocks=1 00:08:16.263 --rc geninfo_unexecuted_blocks=1 00:08:16.263 00:08:16.263 ' 00:08:16.263 16:23:57 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:16.263 16:23:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.263 16:23:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.263 16:23:57 env -- common/autotest_common.sh@10 -- # set +x 00:08:16.263 ************************************ 00:08:16.263 START TEST env_memory 00:08:16.263 ************************************ 00:08:16.263 16:23:57 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:16.263 00:08:16.263 00:08:16.263 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.263 http://cunit.sourceforge.net/ 00:08:16.263 00:08:16.263 00:08:16.263 Suite: memory 00:08:16.263 Test: alloc and free memory map ...[2024-12-06 16:23:58.014524] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:16.263 passed 00:08:16.263 Test: mem map translation ...[2024-12-06 16:23:58.067085] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:16.263 [2024-12-06 16:23:58.067243] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:16.263 [2024-12-06 16:23:58.067378] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:16.263 [2024-12-06 16:23:58.067440] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:16.523 passed 00:08:16.523 Test: mem map registration ...[2024-12-06 16:23:58.146640] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:16.523 [2024-12-06 16:23:58.146723] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:16.523 passed 00:08:16.523 Test: mem map adjacent registrations ...passed 00:08:16.523 00:08:16.523 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.523 suites 1 1 n/a 0 0 00:08:16.523 tests 4 4 4 0 0 00:08:16.523 asserts 152 152 152 0 n/a 00:08:16.523 00:08:16.523 Elapsed time = 0.282 seconds 00:08:16.523 00:08:16.523 real 0m0.315s 00:08:16.523 user 0m0.284s 00:08:16.523 sys 0m0.022s 00:08:16.523 16:23:58 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.523 ************************************ 00:08:16.523 END TEST env_memory 00:08:16.523 ************************************ 00:08:16.523 16:23:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:16.523 16:23:58 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:16.523 16:23:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.523 16:23:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.523 16:23:58 env -- common/autotest_common.sh@10 -- # set +x 00:08:16.523 
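The `lt 1.15 2` check traced earlier in this log comes from `scripts/common.sh`'s `cmp_versions`, which splits both version strings on `.`, `-`, and `:` and compares field by field. A standalone sketch of that comparison (hypothetical function name, not SPDK's exact implementation) might look like:

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version comparison in the style of scripts/common.sh:
# split both versions on '.', '-' and ':' and compare numerically, field by
# field. Succeeds (exit 0) when "$1 $2 $3" holds.
cmp_versions_sketch() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # Missing trailing fields count as 0, so 2.39 == 2.39.0
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then
            [[ $op == ">" || $op == ">=" ]]; return
        elif (( a < b )); then
            [[ $op == "<" || $op == "<=" ]]; return
        fi
    done
    # All fields equal
    [[ $op == "<=" || $op == ">=" || $op == "==" ]]
}

cmp_versions_sketch 1.15 "<" 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```

This is why the log's `lcov --version` gate passes: 1.15 compares less than 2 on the first field alone.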
************************************ 00:08:16.523 START TEST env_vtophys 00:08:16.523 ************************************ 00:08:16.523 16:23:58 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:16.781 EAL: lib.eal log level changed from notice to debug 00:08:16.781 EAL: Detected lcore 0 as core 0 on socket 0 00:08:16.781 EAL: Detected lcore 1 as core 0 on socket 0 00:08:16.781 EAL: Detected lcore 2 as core 0 on socket 0 00:08:16.781 EAL: Detected lcore 3 as core 0 on socket 0 00:08:16.781 EAL: Detected lcore 4 as core 0 on socket 0 00:08:16.781 EAL: Detected lcore 5 as core 0 on socket 0 00:08:16.781 EAL: Detected lcore 6 as core 0 on socket 0 00:08:16.781 EAL: Detected lcore 7 as core 0 on socket 0 00:08:16.781 EAL: Detected lcore 8 as core 0 on socket 0 00:08:16.781 EAL: Detected lcore 9 as core 0 on socket 0 00:08:16.781 EAL: Maximum logical cores by configuration: 128 00:08:16.781 EAL: Detected CPU lcores: 10 00:08:16.781 EAL: Detected NUMA nodes: 1 00:08:16.781 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:16.781 EAL: Detected shared linkage of DPDK 00:08:16.781 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:08:16.781 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:08:16.781 EAL: Registered [vdev] bus. 
00:08:16.781 EAL: bus.vdev log level changed from disabled to notice 00:08:16.781 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:08:16.782 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:08:16.782 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:08:16.782 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:08:16.782 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:08:16.782 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:08:16.782 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:08:16.782 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:08:16.782 EAL: No shared files mode enabled, IPC will be disabled 00:08:16.782 EAL: No shared files mode enabled, IPC is disabled 00:08:16.782 EAL: Selected IOVA mode 'PA' 00:08:16.782 EAL: Probing VFIO support... 00:08:16.782 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:16.782 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:16.782 EAL: Ask a virtual area of 0x2e000 bytes 00:08:16.782 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:16.782 EAL: Setting up physically contiguous memory... 
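The VFIO probe above ("Module /sys/module/vfio not found") is why the devices earlier in this log were bound to `uio_pci_generic` instead of `vfio-pci`. A minimal sketch of that availability check; the `SYSFS_ROOT` override is an assumption added here purely for testability and is not part of SPDK's `setup.sh`:

```shell
#!/usr/bin/env bash
# Sketch: choose a userspace PCI driver the way the log's VFIO probe does --
# prefer vfio-pci when the vfio kernel module is loaded, otherwise fall back
# to uio_pci_generic. SYSFS_ROOT defaults to /sys and exists only so the
# sketch can be exercised against a fake sysfs tree.
pick_pci_driver() {
    local sysfs=${SYSFS_ROOT:-/sys}
    if [[ -d $sysfs/module/vfio ]]; then
        echo vfio-pci
    else
        echo uio_pci_generic
    fi
}
```

On the VM in this run there is no IOMMU and no vfio module, so the fallback path is taken and EAL later selects IOVA mode 'PA'.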
00:08:16.782 EAL: Setting maximum number of open files to 524288 00:08:16.782 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:16.782 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:16.782 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.782 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:16.782 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:16.782 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.782 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:16.782 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:16.782 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.782 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:16.782 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:16.782 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.782 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:16.782 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:16.782 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.782 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:16.782 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:16.782 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.782 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:16.782 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:16.782 EAL: Ask a virtual area of 0x61000 bytes 00:08:16.782 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:16.782 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:16.782 EAL: Ask a virtual area of 0x400000000 bytes 00:08:16.782 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:16.782 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:16.782 EAL: Hugepages will be freed exactly as allocated. 
00:08:16.782 EAL: No shared files mode enabled, IPC is disabled 00:08:16.782 EAL: No shared files mode enabled, IPC is disabled 00:08:16.782 EAL: TSC frequency is ~2290000 KHz 00:08:16.782 EAL: Main lcore 0 is ready (tid=7f8d50653a40;cpuset=[0]) 00:08:16.782 EAL: Trying to obtain current memory policy. 00:08:16.782 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.782 EAL: Restoring previous memory policy: 0 00:08:16.782 EAL: request: mp_malloc_sync 00:08:16.782 EAL: No shared files mode enabled, IPC is disabled 00:08:16.782 EAL: Heap on socket 0 was expanded by 2MB 00:08:16.782 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:16.782 EAL: No shared files mode enabled, IPC is disabled 00:08:16.782 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:16.782 EAL: Mem event callback 'spdk:(nil)' registered 00:08:16.782 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:16.782 00:08:16.782 00:08:16.782 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.782 http://cunit.sourceforge.net/ 00:08:16.782 00:08:16.782 00:08:16.782 Suite: components_suite 00:08:17.348 Test: vtophys_malloc_test ...passed 00:08:17.348 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:17.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:17.348 EAL: Restoring previous memory policy: 4 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was expanded by 4MB 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was shrunk by 4MB 00:08:17.348 EAL: Trying to obtain current memory policy. 
00:08:17.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:17.348 EAL: Restoring previous memory policy: 4 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was expanded by 6MB 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was shrunk by 6MB 00:08:17.348 EAL: Trying to obtain current memory policy. 00:08:17.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:17.348 EAL: Restoring previous memory policy: 4 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was expanded by 10MB 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was shrunk by 10MB 00:08:17.348 EAL: Trying to obtain current memory policy. 00:08:17.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:17.348 EAL: Restoring previous memory policy: 4 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was expanded by 18MB 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was shrunk by 18MB 00:08:17.348 EAL: Trying to obtain current memory policy. 
00:08:17.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:17.348 EAL: Restoring previous memory policy: 4 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was expanded by 34MB 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was shrunk by 34MB 00:08:17.348 EAL: Trying to obtain current memory policy. 00:08:17.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:17.348 EAL: Restoring previous memory policy: 4 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was expanded by 66MB 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was shrunk by 66MB 00:08:17.348 EAL: Trying to obtain current memory policy. 00:08:17.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:17.348 EAL: Restoring previous memory policy: 4 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was expanded by 130MB 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was shrunk by 130MB 00:08:17.348 EAL: Trying to obtain current memory policy. 
00:08:17.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:17.348 EAL: Restoring previous memory policy: 4 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.348 EAL: request: mp_malloc_sync 00:08:17.348 EAL: No shared files mode enabled, IPC is disabled 00:08:17.348 EAL: Heap on socket 0 was expanded by 258MB 00:08:17.348 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.607 EAL: request: mp_malloc_sync 00:08:17.607 EAL: No shared files mode enabled, IPC is disabled 00:08:17.607 EAL: Heap on socket 0 was shrunk by 258MB 00:08:17.607 EAL: Trying to obtain current memory policy. 00:08:17.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:17.607 EAL: Restoring previous memory policy: 4 00:08:17.607 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.607 EAL: request: mp_malloc_sync 00:08:17.607 EAL: No shared files mode enabled, IPC is disabled 00:08:17.607 EAL: Heap on socket 0 was expanded by 514MB 00:08:17.607 EAL: Calling mem event callback 'spdk:(nil)' 00:08:17.865 EAL: request: mp_malloc_sync 00:08:17.865 EAL: No shared files mode enabled, IPC is disabled 00:08:17.865 EAL: Heap on socket 0 was shrunk by 514MB 00:08:17.865 EAL: Trying to obtain current memory policy. 
00:08:17.865 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:18.125 EAL: Restoring previous memory policy: 4 00:08:18.125 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.125 EAL: request: mp_malloc_sync 00:08:18.125 EAL: No shared files mode enabled, IPC is disabled 00:08:18.125 EAL: Heap on socket 0 was expanded by 1026MB 00:08:18.125 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.384 passed 00:08:18.384 00:08:18.384 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.384 suites 1 1 n/a 0 0 00:08:18.384 tests 2 2 2 0 0 00:08:18.384 asserts 5694 5694 5694 0 n/a 00:08:18.384 00:08:18.384 Elapsed time = 1.464 seconds 00:08:18.384 EAL: request: mp_malloc_sync 00:08:18.384 EAL: No shared files mode enabled, IPC is disabled 00:08:18.384 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:18.384 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.384 EAL: request: mp_malloc_sync 00:08:18.384 EAL: No shared files mode enabled, IPC is disabled 00:08:18.384 EAL: Heap on socket 0 was shrunk by 2MB 00:08:18.384 EAL: No shared files mode enabled, IPC is disabled 00:08:18.384 EAL: No shared files mode enabled, IPC is disabled 00:08:18.384 EAL: No shared files mode enabled, IPC is disabled 00:08:18.384 00:08:18.384 real 0m1.763s 00:08:18.384 user 0m0.811s 00:08:18.384 sys 0m0.802s 00:08:18.384 ************************************ 00:08:18.384 END TEST env_vtophys 00:08:18.384 ************************************ 00:08:18.384 16:24:00 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.384 16:24:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:18.384 16:24:00 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:18.384 16:24:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.384 16:24:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.384 16:24:00 env -- common/autotest_common.sh@10 -- # set +x 00:08:18.384 
************************************ 00:08:18.384 START TEST env_pci 00:08:18.384 ************************************ 00:08:18.384 16:24:00 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:18.384 00:08:18.384 00:08:18.384 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.384 http://cunit.sourceforge.net/ 00:08:18.384 00:08:18.384 00:08:18.384 Suite: pci 00:08:18.384 Test: pci_hook ...[2024-12-06 16:24:00.175338] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69335 has claimed it 00:08:18.384 passed 00:08:18.384 00:08:18.384 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.384 suites 1 1 n/a 0 0 00:08:18.384 tests 1 1 1 0 0 00:08:18.384 asserts 25 25 25 0 n/a 00:08:18.384 00:08:18.384 Elapsed time = 0.006 seconds 00:08:18.384 EAL: Cannot find device (10000:00:01.0) 00:08:18.384 EAL: Failed to attach device on primary process 00:08:18.645 00:08:18.645 real 0m0.091s 00:08:18.645 user 0m0.038s 00:08:18.645 sys 0m0.052s 00:08:18.645 16:24:00 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.645 16:24:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:18.645 ************************************ 00:08:18.645 END TEST env_pci 00:08:18.645 ************************************ 00:08:18.645 16:24:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:18.645 16:24:00 env -- env/env.sh@15 -- # uname 00:08:18.645 16:24:00 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:18.645 16:24:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:18.645 16:24:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:18.645 16:24:00 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:18.645 16:24:00 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.645 16:24:00 env -- common/autotest_common.sh@10 -- # set +x 00:08:18.645 ************************************ 00:08:18.645 START TEST env_dpdk_post_init 00:08:18.645 ************************************ 00:08:18.645 16:24:00 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:18.645 EAL: Detected CPU lcores: 10 00:08:18.645 EAL: Detected NUMA nodes: 1 00:08:18.645 EAL: Detected shared linkage of DPDK 00:08:18.645 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:18.645 EAL: Selected IOVA mode 'PA' 00:08:18.907 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:18.907 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:18.907 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:18.907 Starting DPDK initialization... 00:08:18.907 Starting SPDK post initialization... 00:08:18.907 SPDK NVMe probe 00:08:18.907 Attaching to 0000:00:10.0 00:08:18.907 Attaching to 0000:00:11.0 00:08:18.907 Attached to 0000:00:10.0 00:08:18.907 Attached to 0000:00:11.0 00:08:18.907 Cleaning up... 
00:08:18.907 00:08:18.907 real 0m0.273s 00:08:18.907 user 0m0.091s 00:08:18.907 sys 0m0.082s 00:08:18.907 16:24:00 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.907 16:24:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:18.907 ************************************ 00:08:18.907 END TEST env_dpdk_post_init 00:08:18.907 ************************************ 00:08:18.907 16:24:00 env -- env/env.sh@26 -- # uname 00:08:18.907 16:24:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:18.907 16:24:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:18.907 16:24:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.907 16:24:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.907 16:24:00 env -- common/autotest_common.sh@10 -- # set +x 00:08:18.907 ************************************ 00:08:18.907 START TEST env_mem_callbacks 00:08:18.907 ************************************ 00:08:18.907 16:24:00 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:18.907 EAL: Detected CPU lcores: 10 00:08:18.907 EAL: Detected NUMA nodes: 1 00:08:18.907 EAL: Detected shared linkage of DPDK 00:08:18.907 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:18.907 EAL: Selected IOVA mode 'PA' 00:08:19.165 00:08:19.165 00:08:19.165 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.165 http://cunit.sourceforge.net/ 00:08:19.165 00:08:19.165 00:08:19.165 Suite: memory 00:08:19.165 Test: test ... 
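The `opal_revert_cleanup` path earlier in this log filtered controllers by reading each device's PCI device ID from sysfs (`cat /sys/bus/pci/devices/<bdf>/device`) and comparing it against `0x0a54`; the QEMU NVMe devices report `0x0010`, so none matched. A hedged sketch of that `get_nvme_bdfs_by_id`-style filter, with the sysfs root parameterized (an assumption for testability; the real helper reads `/sys/bus/pci/devices` directly):

```shell
#!/usr/bin/env bash
# Sketch: given a PCI device ID such as 0x0a54, emit the BDFs under the
# sysfs tree whose 'device' attribute matches it. PCI_SYSFS is a
# test-only override; real sysfs lives at /sys/bus/pci/devices.
bdfs_by_device_id() {
    local want=$1 devdir bdfs=()
    for devdir in "${PCI_SYSFS:-/sys/bus/pci/devices}"/*; do
        [[ -r $devdir/device ]] || continue
        if [[ $(<"$devdir/device") == "$want" ]]; then
            bdfs+=("${devdir##*/}")   # strip the path, keep the BDF
        fi
    done
    ((${#bdfs[@]})) && printf '%s\n' "${bdfs[@]}"
}
```

With the two `1b36:0010` controllers from this run, a `0x0a54` query returns nothing, which is exactly why the log shows `(( 0 > 0 ))` and the cleanup returns early.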
00:08:19.165 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:19.165 register 0x200000200000 2097152 00:08:19.165 malloc 3145728 00:08:19.165 register 0x200000400000 4194304 00:08:19.165 buf 0x200000500000 len 3145728 PASSED 00:08:19.165 malloc 64 00:08:19.165 buf 0x2000004fff40 len 64 PASSED 00:08:19.165 malloc 4194304 00:08:19.165 register 0x200000800000 6291456 00:08:19.165 buf 0x200000a00000 len 4194304 PASSED 00:08:19.165 free 0x200000500000 3145728 00:08:19.165 free 0x2000004fff40 64 00:08:19.165 unregister 0x200000400000 4194304 PASSED 00:08:19.165 free 0x200000a00000 4194304 00:08:19.165 unregister 0x200000800000 6291456 PASSED 00:08:19.165 malloc 8388608 00:08:19.165 register 0x200000400000 10485760 00:08:19.166 buf 0x200000600000 len 8388608 PASSED 00:08:19.166 free 0x200000600000 8388608 00:08:19.166 unregister 0x200000400000 10485760 PASSED 00:08:19.166 passed 00:08:19.166 00:08:19.166 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.166 suites 1 1 n/a 0 0 00:08:19.166 tests 1 1 1 0 0 00:08:19.166 asserts 15 15 15 0 n/a 00:08:19.166 00:08:19.166 Elapsed time = 0.013 seconds 00:08:19.166 00:08:19.166 real 0m0.210s 00:08:19.166 user 0m0.044s 00:08:19.166 sys 0m0.064s 00:08:19.166 16:24:00 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.166 16:24:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:19.166 ************************************ 00:08:19.166 END TEST env_mem_callbacks 00:08:19.166 ************************************ 00:08:19.166 00:08:19.166 real 0m3.188s 00:08:19.166 user 0m1.487s 00:08:19.166 sys 0m1.353s 00:08:19.166 16:24:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.166 16:24:00 env -- common/autotest_common.sh@10 -- # set +x 00:08:19.166 ************************************ 00:08:19.166 END TEST env 00:08:19.166 ************************************ 00:08:19.166 16:24:00 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:19.166 16:24:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.166 16:24:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.166 16:24:00 -- common/autotest_common.sh@10 -- # set +x 00:08:19.166 ************************************ 00:08:19.166 START TEST rpc 00:08:19.166 ************************************ 00:08:19.166 16:24:00 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:19.425 * Looking for test storage... 00:08:19.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:19.425 16:24:01 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.425 16:24:01 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.425 16:24:01 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:19.425 16:24:01 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:19.425 16:24:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.425 16:24:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.425 16:24:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.425 16:24:01 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.425 16:24:01 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.425 16:24:01 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.425 16:24:01 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.425 16:24:01 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.425 16:24:01 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.425 16:24:01 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.425 16:24:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.425 16:24:01 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:19.425 16:24:01 rpc -- scripts/common.sh@345 -- # : 1 00:08:19.425 16:24:01 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.425 16:24:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.425 16:24:01 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:19.425 16:24:01 rpc -- scripts/common.sh@353 -- # local d=1 00:08:19.425 16:24:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.425 16:24:01 rpc -- scripts/common.sh@355 -- # echo 1 00:08:19.425 16:24:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.425 16:24:01 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:19.425 16:24:01 rpc -- scripts/common.sh@353 -- # local d=2 00:08:19.425 16:24:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.425 16:24:01 rpc -- scripts/common.sh@355 -- # echo 2 00:08:19.425 16:24:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.425 16:24:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.425 16:24:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.425 16:24:01 rpc -- scripts/common.sh@368 -- # return 0 00:08:19.425 16:24:01 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.425 16:24:01 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.425 --rc genhtml_branch_coverage=1 00:08:19.425 --rc genhtml_function_coverage=1 00:08:19.425 --rc genhtml_legend=1 00:08:19.425 --rc geninfo_all_blocks=1 00:08:19.425 --rc geninfo_unexecuted_blocks=1 00:08:19.425 00:08:19.425 ' 00:08:19.426 16:24:01 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:19.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.426 --rc genhtml_branch_coverage=1 00:08:19.426 --rc genhtml_function_coverage=1 00:08:19.426 --rc genhtml_legend=1 00:08:19.426 --rc geninfo_all_blocks=1 00:08:19.426 --rc geninfo_unexecuted_blocks=1 00:08:19.426 00:08:19.426 ' 00:08:19.426 16:24:01 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:19.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:19.426 --rc genhtml_branch_coverage=1 00:08:19.426 --rc genhtml_function_coverage=1 00:08:19.426 --rc genhtml_legend=1 00:08:19.426 --rc geninfo_all_blocks=1 00:08:19.426 --rc geninfo_unexecuted_blocks=1 00:08:19.426 00:08:19.426 ' 00:08:19.426 16:24:01 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:19.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.426 --rc genhtml_branch_coverage=1 00:08:19.426 --rc genhtml_function_coverage=1 00:08:19.426 --rc genhtml_legend=1 00:08:19.426 --rc geninfo_all_blocks=1 00:08:19.426 --rc geninfo_unexecuted_blocks=1 00:08:19.426 00:08:19.426 ' 00:08:19.426 16:24:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69462 00:08:19.426 16:24:01 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:19.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.426 16:24:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:19.426 16:24:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69462 00:08:19.426 16:24:01 rpc -- common/autotest_common.sh@835 -- # '[' -z 69462 ']' 00:08:19.426 16:24:01 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.426 16:24:01 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.426 16:24:01 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.426 16:24:01 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.426 16:24:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.684 [2024-12-06 16:24:01.310101] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:08:19.684 [2024-12-06 16:24:01.310297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69462 ] 00:08:19.685 [2024-12-06 16:24:01.489730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.685 [2024-12-06 16:24:01.520664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:19.685 [2024-12-06 16:24:01.520734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69462' to capture a snapshot of events at runtime. 00:08:19.685 [2024-12-06 16:24:01.520759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.685 [2024-12-06 16:24:01.520769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.685 [2024-12-06 16:24:01.520798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69462 for offline analysis/debug. 
00:08:19.685 [2024-12-06 16:24:01.521373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.620 16:24:02 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.620 16:24:02 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:20.620 16:24:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:20.620 16:24:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:20.620 16:24:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:20.620 16:24:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:20.620 16:24:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.620 16:24:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.620 16:24:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.620 ************************************ 00:08:20.620 START TEST rpc_integrity 00:08:20.620 ************************************ 00:08:20.620 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:20.620 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:20.620 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.620 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.620 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.620 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:20.620 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:20.620 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:20.620 16:24:02 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:20.620 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.620 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.620 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.620 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:20.620 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:20.620 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.620 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.620 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.620 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:20.620 { 00:08:20.620 "name": "Malloc0", 00:08:20.620 "aliases": [ 00:08:20.620 "67d3e110-ed79-42ea-a88c-8128e4b6ab2d" 00:08:20.620 ], 00:08:20.620 "product_name": "Malloc disk", 00:08:20.620 "block_size": 512, 00:08:20.620 "num_blocks": 16384, 00:08:20.620 "uuid": "67d3e110-ed79-42ea-a88c-8128e4b6ab2d", 00:08:20.620 "assigned_rate_limits": { 00:08:20.620 "rw_ios_per_sec": 0, 00:08:20.620 "rw_mbytes_per_sec": 0, 00:08:20.620 "r_mbytes_per_sec": 0, 00:08:20.620 "w_mbytes_per_sec": 0 00:08:20.620 }, 00:08:20.620 "claimed": false, 00:08:20.620 "zoned": false, 00:08:20.620 "supported_io_types": { 00:08:20.620 "read": true, 00:08:20.620 "write": true, 00:08:20.620 "unmap": true, 00:08:20.620 "flush": true, 00:08:20.620 "reset": true, 00:08:20.620 "nvme_admin": false, 00:08:20.620 "nvme_io": false, 00:08:20.620 "nvme_io_md": false, 00:08:20.620 "write_zeroes": true, 00:08:20.620 "zcopy": true, 00:08:20.620 "get_zone_info": false, 00:08:20.620 "zone_management": false, 00:08:20.620 "zone_append": false, 00:08:20.620 "compare": false, 00:08:20.620 "compare_and_write": false, 00:08:20.620 "abort": true, 00:08:20.620 "seek_hole": false, 
00:08:20.620 "seek_data": false, 00:08:20.620 "copy": true, 00:08:20.620 "nvme_iov_md": false 00:08:20.620 }, 00:08:20.620 "memory_domains": [ 00:08:20.620 { 00:08:20.620 "dma_device_id": "system", 00:08:20.620 "dma_device_type": 1 00:08:20.620 }, 00:08:20.620 { 00:08:20.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.621 "dma_device_type": 2 00:08:20.621 } 00:08:20.621 ], 00:08:20.621 "driver_specific": {} 00:08:20.621 } 00:08:20.621 ]' 00:08:20.621 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:20.621 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:20.621 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.621 [2024-12-06 16:24:02.320799] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:20.621 [2024-12-06 16:24:02.320929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.621 [2024-12-06 16:24:02.320967] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:20.621 [2024-12-06 16:24:02.320986] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.621 [2024-12-06 16:24:02.323561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.621 [2024-12-06 16:24:02.323603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:20.621 Passthru0 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.621 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.621 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:20.621 { 00:08:20.621 "name": "Malloc0", 00:08:20.621 "aliases": [ 00:08:20.621 "67d3e110-ed79-42ea-a88c-8128e4b6ab2d" 00:08:20.621 ], 00:08:20.621 "product_name": "Malloc disk", 00:08:20.621 "block_size": 512, 00:08:20.621 "num_blocks": 16384, 00:08:20.621 "uuid": "67d3e110-ed79-42ea-a88c-8128e4b6ab2d", 00:08:20.621 "assigned_rate_limits": { 00:08:20.621 "rw_ios_per_sec": 0, 00:08:20.621 "rw_mbytes_per_sec": 0, 00:08:20.621 "r_mbytes_per_sec": 0, 00:08:20.621 "w_mbytes_per_sec": 0 00:08:20.621 }, 00:08:20.621 "claimed": true, 00:08:20.621 "claim_type": "exclusive_write", 00:08:20.621 "zoned": false, 00:08:20.621 "supported_io_types": { 00:08:20.621 "read": true, 00:08:20.621 "write": true, 00:08:20.621 "unmap": true, 00:08:20.621 "flush": true, 00:08:20.621 "reset": true, 00:08:20.621 "nvme_admin": false, 00:08:20.621 "nvme_io": false, 00:08:20.621 "nvme_io_md": false, 00:08:20.621 "write_zeroes": true, 00:08:20.621 "zcopy": true, 00:08:20.621 "get_zone_info": false, 00:08:20.621 "zone_management": false, 00:08:20.621 "zone_append": false, 00:08:20.621 "compare": false, 00:08:20.621 "compare_and_write": false, 00:08:20.621 "abort": true, 00:08:20.621 "seek_hole": false, 00:08:20.621 "seek_data": false, 00:08:20.621 "copy": true, 00:08:20.621 "nvme_iov_md": false 00:08:20.621 }, 00:08:20.621 "memory_domains": [ 00:08:20.621 { 00:08:20.621 "dma_device_id": "system", 00:08:20.621 "dma_device_type": 1 00:08:20.621 }, 00:08:20.621 { 00:08:20.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.621 "dma_device_type": 2 00:08:20.621 } 00:08:20.621 ], 00:08:20.621 "driver_specific": {} 00:08:20.621 }, 00:08:20.621 { 00:08:20.621 "name": "Passthru0", 00:08:20.621 "aliases": [ 00:08:20.621 "ef6260f4-1868-5b00-bf46-81dee6620460" 00:08:20.621 ], 00:08:20.621 "product_name": "passthru", 00:08:20.621 
"block_size": 512, 00:08:20.621 "num_blocks": 16384, 00:08:20.621 "uuid": "ef6260f4-1868-5b00-bf46-81dee6620460", 00:08:20.621 "assigned_rate_limits": { 00:08:20.621 "rw_ios_per_sec": 0, 00:08:20.621 "rw_mbytes_per_sec": 0, 00:08:20.621 "r_mbytes_per_sec": 0, 00:08:20.621 "w_mbytes_per_sec": 0 00:08:20.621 }, 00:08:20.621 "claimed": false, 00:08:20.621 "zoned": false, 00:08:20.621 "supported_io_types": { 00:08:20.621 "read": true, 00:08:20.621 "write": true, 00:08:20.621 "unmap": true, 00:08:20.621 "flush": true, 00:08:20.621 "reset": true, 00:08:20.621 "nvme_admin": false, 00:08:20.621 "nvme_io": false, 00:08:20.621 "nvme_io_md": false, 00:08:20.621 "write_zeroes": true, 00:08:20.621 "zcopy": true, 00:08:20.621 "get_zone_info": false, 00:08:20.621 "zone_management": false, 00:08:20.621 "zone_append": false, 00:08:20.621 "compare": false, 00:08:20.621 "compare_and_write": false, 00:08:20.621 "abort": true, 00:08:20.621 "seek_hole": false, 00:08:20.621 "seek_data": false, 00:08:20.621 "copy": true, 00:08:20.621 "nvme_iov_md": false 00:08:20.621 }, 00:08:20.621 "memory_domains": [ 00:08:20.621 { 00:08:20.621 "dma_device_id": "system", 00:08:20.621 "dma_device_type": 1 00:08:20.621 }, 00:08:20.621 { 00:08:20.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.621 "dma_device_type": 2 00:08:20.621 } 00:08:20.621 ], 00:08:20.621 "driver_specific": { 00:08:20.621 "passthru": { 00:08:20.621 "name": "Passthru0", 00:08:20.621 "base_bdev_name": "Malloc0" 00:08:20.621 } 00:08:20.621 } 00:08:20.621 } 00:08:20.621 ]' 00:08:20.621 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:20.621 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:20.621 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.621 16:24:02 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.621 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.621 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.621 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.621 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:20.621 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:20.880 ************************************ 00:08:20.880 END TEST rpc_integrity 00:08:20.880 ************************************ 00:08:20.880 16:24:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:20.880 00:08:20.880 real 0m0.326s 00:08:20.880 user 0m0.188s 00:08:20.880 sys 0m0.062s 00:08:20.880 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.880 16:24:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:20.880 16:24:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:20.880 16:24:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.880 16:24:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.880 16:24:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.880 ************************************ 00:08:20.880 START TEST rpc_plugins 00:08:20.880 ************************************ 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:20.880 16:24:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.880 16:24:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:20.880 16:24:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.880 16:24:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:20.880 { 00:08:20.880 "name": "Malloc1", 00:08:20.880 "aliases": [ 00:08:20.880 "f05466b6-470d-4486-a25e-f34534a13f41" 00:08:20.880 ], 00:08:20.880 "product_name": "Malloc disk", 00:08:20.880 "block_size": 4096, 00:08:20.880 "num_blocks": 256, 00:08:20.880 "uuid": "f05466b6-470d-4486-a25e-f34534a13f41", 00:08:20.880 "assigned_rate_limits": { 00:08:20.880 "rw_ios_per_sec": 0, 00:08:20.880 "rw_mbytes_per_sec": 0, 00:08:20.880 "r_mbytes_per_sec": 0, 00:08:20.880 "w_mbytes_per_sec": 0 00:08:20.880 }, 00:08:20.880 "claimed": false, 00:08:20.880 "zoned": false, 00:08:20.880 "supported_io_types": { 00:08:20.880 "read": true, 00:08:20.880 "write": true, 00:08:20.880 "unmap": true, 00:08:20.880 "flush": true, 00:08:20.880 "reset": true, 00:08:20.880 "nvme_admin": false, 00:08:20.880 "nvme_io": false, 00:08:20.880 "nvme_io_md": false, 00:08:20.880 "write_zeroes": true, 00:08:20.880 "zcopy": true, 00:08:20.880 "get_zone_info": false, 00:08:20.880 "zone_management": false, 00:08:20.880 "zone_append": false, 00:08:20.880 "compare": false, 00:08:20.880 "compare_and_write": false, 00:08:20.880 "abort": true, 00:08:20.880 "seek_hole": false, 00:08:20.880 "seek_data": false, 00:08:20.880 "copy": 
true, 00:08:20.880 "nvme_iov_md": false 00:08:20.880 }, 00:08:20.880 "memory_domains": [ 00:08:20.880 { 00:08:20.880 "dma_device_id": "system", 00:08:20.880 "dma_device_type": 1 00:08:20.880 }, 00:08:20.880 { 00:08:20.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.880 "dma_device_type": 2 00:08:20.880 } 00:08:20.880 ], 00:08:20.880 "driver_specific": {} 00:08:20.880 } 00:08:20.880 ]' 00:08:20.880 16:24:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:20.880 16:24:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:20.880 16:24:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.880 16:24:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:20.880 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.880 16:24:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:20.880 16:24:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:21.140 ************************************ 00:08:21.140 END TEST rpc_plugins 00:08:21.140 ************************************ 00:08:21.140 16:24:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:21.140 00:08:21.140 real 0m0.164s 00:08:21.140 user 0m0.101s 00:08:21.140 sys 0m0.024s 00:08:21.140 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.140 16:24:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:21.140 16:24:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:21.140 16:24:02 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.140 16:24:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.140 16:24:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.140 ************************************ 00:08:21.140 START TEST rpc_trace_cmd_test 00:08:21.140 ************************************ 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:21.140 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69462", 00:08:21.140 "tpoint_group_mask": "0x8", 00:08:21.140 "iscsi_conn": { 00:08:21.140 "mask": "0x2", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "scsi": { 00:08:21.140 "mask": "0x4", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "bdev": { 00:08:21.140 "mask": "0x8", 00:08:21.140 "tpoint_mask": "0xffffffffffffffff" 00:08:21.140 }, 00:08:21.140 "nvmf_rdma": { 00:08:21.140 "mask": "0x10", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "nvmf_tcp": { 00:08:21.140 "mask": "0x20", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "ftl": { 00:08:21.140 "mask": "0x40", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "blobfs": { 00:08:21.140 "mask": "0x80", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "dsa": { 00:08:21.140 "mask": "0x200", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "thread": { 00:08:21.140 "mask": "0x400", 00:08:21.140 
"tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "nvme_pcie": { 00:08:21.140 "mask": "0x800", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "iaa": { 00:08:21.140 "mask": "0x1000", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "nvme_tcp": { 00:08:21.140 "mask": "0x2000", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "bdev_nvme": { 00:08:21.140 "mask": "0x4000", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "sock": { 00:08:21.140 "mask": "0x8000", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "blob": { 00:08:21.140 "mask": "0x10000", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "bdev_raid": { 00:08:21.140 "mask": "0x20000", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 }, 00:08:21.140 "scheduler": { 00:08:21.140 "mask": "0x40000", 00:08:21.140 "tpoint_mask": "0x0" 00:08:21.140 } 00:08:21.140 }' 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:21.140 16:24:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:21.400 16:24:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:21.400 16:24:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:21.400 ************************************ 00:08:21.400 END TEST rpc_trace_cmd_test 00:08:21.400 ************************************ 00:08:21.400 16:24:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:21.400 00:08:21.400 real 0m0.235s 00:08:21.400 user 
0m0.194s 00:08:21.400 sys 0m0.033s 00:08:21.400 16:24:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.400 16:24:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.400 16:24:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:21.400 16:24:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:21.400 16:24:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:21.400 16:24:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.400 16:24:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.400 16:24:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.400 ************************************ 00:08:21.400 START TEST rpc_daemon_integrity 00:08:21.400 ************************************ 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:21.400 { 00:08:21.400 "name": "Malloc2", 00:08:21.400 "aliases": [ 00:08:21.400 "34c8f75e-1943-41a7-9c1e-e8d0a0feec5e" 00:08:21.400 ], 00:08:21.400 "product_name": "Malloc disk", 00:08:21.400 "block_size": 512, 00:08:21.400 "num_blocks": 16384, 00:08:21.400 "uuid": "34c8f75e-1943-41a7-9c1e-e8d0a0feec5e", 00:08:21.400 "assigned_rate_limits": { 00:08:21.400 "rw_ios_per_sec": 0, 00:08:21.400 "rw_mbytes_per_sec": 0, 00:08:21.400 "r_mbytes_per_sec": 0, 00:08:21.400 "w_mbytes_per_sec": 0 00:08:21.400 }, 00:08:21.400 "claimed": false, 00:08:21.400 "zoned": false, 00:08:21.400 "supported_io_types": { 00:08:21.400 "read": true, 00:08:21.400 "write": true, 00:08:21.400 "unmap": true, 00:08:21.400 "flush": true, 00:08:21.400 "reset": true, 00:08:21.400 "nvme_admin": false, 00:08:21.400 "nvme_io": false, 00:08:21.400 "nvme_io_md": false, 00:08:21.400 "write_zeroes": true, 00:08:21.400 "zcopy": true, 00:08:21.400 "get_zone_info": false, 00:08:21.400 "zone_management": false, 00:08:21.400 "zone_append": false, 00:08:21.400 "compare": false, 00:08:21.400 "compare_and_write": false, 00:08:21.400 "abort": true, 00:08:21.400 "seek_hole": false, 00:08:21.400 "seek_data": false, 00:08:21.400 "copy": true, 00:08:21.400 "nvme_iov_md": false 00:08:21.400 }, 00:08:21.400 "memory_domains": [ 00:08:21.400 { 00:08:21.400 "dma_device_id": "system", 00:08:21.400 "dma_device_type": 1 00:08:21.400 }, 00:08:21.400 { 00:08:21.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.400 "dma_device_type": 2 00:08:21.400 } 
00:08:21.400 ], 00:08:21.400 "driver_specific": {} 00:08:21.400 } 00:08:21.400 ]' 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:21.400 [2024-12-06 16:24:03.227802] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:21.400 [2024-12-06 16:24:03.227930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.400 [2024-12-06 16:24:03.227960] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:21.400 [2024-12-06 16:24:03.227970] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.400 [2024-12-06 16:24:03.230326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.400 [2024-12-06 16:24:03.230365] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:21.400 Passthru0 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.400 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:21.660 { 00:08:21.660 "name": "Malloc2", 00:08:21.660 "aliases": [ 00:08:21.660 "34c8f75e-1943-41a7-9c1e-e8d0a0feec5e" 
00:08:21.660 ], 00:08:21.660 "product_name": "Malloc disk", 00:08:21.660 "block_size": 512, 00:08:21.660 "num_blocks": 16384, 00:08:21.660 "uuid": "34c8f75e-1943-41a7-9c1e-e8d0a0feec5e", 00:08:21.660 "assigned_rate_limits": { 00:08:21.660 "rw_ios_per_sec": 0, 00:08:21.660 "rw_mbytes_per_sec": 0, 00:08:21.660 "r_mbytes_per_sec": 0, 00:08:21.660 "w_mbytes_per_sec": 0 00:08:21.660 }, 00:08:21.660 "claimed": true, 00:08:21.660 "claim_type": "exclusive_write", 00:08:21.660 "zoned": false, 00:08:21.660 "supported_io_types": { 00:08:21.660 "read": true, 00:08:21.660 "write": true, 00:08:21.660 "unmap": true, 00:08:21.660 "flush": true, 00:08:21.660 "reset": true, 00:08:21.660 "nvme_admin": false, 00:08:21.660 "nvme_io": false, 00:08:21.660 "nvme_io_md": false, 00:08:21.660 "write_zeroes": true, 00:08:21.660 "zcopy": true, 00:08:21.660 "get_zone_info": false, 00:08:21.660 "zone_management": false, 00:08:21.660 "zone_append": false, 00:08:21.660 "compare": false, 00:08:21.660 "compare_and_write": false, 00:08:21.660 "abort": true, 00:08:21.660 "seek_hole": false, 00:08:21.660 "seek_data": false, 00:08:21.660 "copy": true, 00:08:21.660 "nvme_iov_md": false 00:08:21.660 }, 00:08:21.660 "memory_domains": [ 00:08:21.660 { 00:08:21.660 "dma_device_id": "system", 00:08:21.660 "dma_device_type": 1 00:08:21.660 }, 00:08:21.660 { 00:08:21.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.660 "dma_device_type": 2 00:08:21.660 } 00:08:21.660 ], 00:08:21.660 "driver_specific": {} 00:08:21.660 }, 00:08:21.660 { 00:08:21.660 "name": "Passthru0", 00:08:21.660 "aliases": [ 00:08:21.660 "173a47d7-e091-5b8e-b5b8-9f9330ff1ccc" 00:08:21.660 ], 00:08:21.660 "product_name": "passthru", 00:08:21.660 "block_size": 512, 00:08:21.660 "num_blocks": 16384, 00:08:21.660 "uuid": "173a47d7-e091-5b8e-b5b8-9f9330ff1ccc", 00:08:21.660 "assigned_rate_limits": { 00:08:21.660 "rw_ios_per_sec": 0, 00:08:21.660 "rw_mbytes_per_sec": 0, 00:08:21.660 "r_mbytes_per_sec": 0, 00:08:21.660 "w_mbytes_per_sec": 0 
00:08:21.660 }, 00:08:21.660 "claimed": false, 00:08:21.660 "zoned": false, 00:08:21.660 "supported_io_types": { 00:08:21.660 "read": true, 00:08:21.660 "write": true, 00:08:21.660 "unmap": true, 00:08:21.660 "flush": true, 00:08:21.660 "reset": true, 00:08:21.660 "nvme_admin": false, 00:08:21.660 "nvme_io": false, 00:08:21.660 "nvme_io_md": false, 00:08:21.660 "write_zeroes": true, 00:08:21.660 "zcopy": true, 00:08:21.660 "get_zone_info": false, 00:08:21.660 "zone_management": false, 00:08:21.660 "zone_append": false, 00:08:21.660 "compare": false, 00:08:21.660 "compare_and_write": false, 00:08:21.660 "abort": true, 00:08:21.660 "seek_hole": false, 00:08:21.660 "seek_data": false, 00:08:21.660 "copy": true, 00:08:21.660 "nvme_iov_md": false 00:08:21.660 }, 00:08:21.660 "memory_domains": [ 00:08:21.660 { 00:08:21.660 "dma_device_id": "system", 00:08:21.660 "dma_device_type": 1 00:08:21.660 }, 00:08:21.660 { 00:08:21.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.660 "dma_device_type": 2 00:08:21.660 } 00:08:21.660 ], 00:08:21.660 "driver_specific": { 00:08:21.660 "passthru": { 00:08:21.660 "name": "Passthru0", 00:08:21.660 "base_bdev_name": "Malloc2" 00:08:21.660 } 00:08:21.660 } 00:08:21.660 } 00:08:21.660 ]' 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.660 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:21.661 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.661 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:21.661 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.661 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:21.661 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:21.661 ************************************ 00:08:21.661 END TEST rpc_daemon_integrity 00:08:21.661 ************************************ 00:08:21.661 16:24:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:21.661 00:08:21.661 real 0m0.312s 00:08:21.661 user 0m0.187s 00:08:21.661 sys 0m0.056s 00:08:21.661 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.661 16:24:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:21.661 16:24:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:21.661 16:24:03 rpc -- rpc/rpc.sh@84 -- # killprocess 69462 00:08:21.661 16:24:03 rpc -- common/autotest_common.sh@954 -- # '[' -z 69462 ']' 00:08:21.661 16:24:03 rpc -- common/autotest_common.sh@958 -- # kill -0 69462 00:08:21.661 16:24:03 rpc -- common/autotest_common.sh@959 -- # uname 00:08:21.661 16:24:03 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.661 16:24:03 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69462 00:08:21.661 killing process with pid 69462 00:08:21.661 16:24:03 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.661 16:24:03 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:08:21.661 16:24:03 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69462' 00:08:21.661 16:24:03 rpc -- common/autotest_common.sh@973 -- # kill 69462 00:08:21.661 16:24:03 rpc -- common/autotest_common.sh@978 -- # wait 69462 00:08:22.229 ************************************ 00:08:22.229 END TEST rpc 00:08:22.229 ************************************ 00:08:22.229 00:08:22.229 real 0m2.887s 00:08:22.229 user 0m3.481s 00:08:22.229 sys 0m0.873s 00:08:22.229 16:24:03 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.229 16:24:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.229 16:24:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:22.229 16:24:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.229 16:24:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.229 16:24:03 -- common/autotest_common.sh@10 -- # set +x 00:08:22.229 ************************************ 00:08:22.229 START TEST skip_rpc 00:08:22.229 ************************************ 00:08:22.229 16:24:03 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:22.229 * Looking for test storage... 
00:08:22.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:22.229 16:24:04 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:22.229 16:24:04 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:22.229 16:24:04 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:22.489 16:24:04 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.489 16:24:04 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:22.489 16:24:04 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.489 16:24:04 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:22.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.489 --rc genhtml_branch_coverage=1 00:08:22.489 --rc genhtml_function_coverage=1 00:08:22.489 --rc genhtml_legend=1 00:08:22.489 --rc geninfo_all_blocks=1 00:08:22.489 --rc geninfo_unexecuted_blocks=1 00:08:22.489 00:08:22.489 ' 00:08:22.489 16:24:04 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:22.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.489 --rc genhtml_branch_coverage=1 00:08:22.489 --rc genhtml_function_coverage=1 00:08:22.489 --rc genhtml_legend=1 00:08:22.489 --rc geninfo_all_blocks=1 00:08:22.489 --rc geninfo_unexecuted_blocks=1 00:08:22.489 00:08:22.489 ' 00:08:22.489 16:24:04 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:08:22.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.490 --rc genhtml_branch_coverage=1 00:08:22.490 --rc genhtml_function_coverage=1 00:08:22.490 --rc genhtml_legend=1 00:08:22.490 --rc geninfo_all_blocks=1 00:08:22.490 --rc geninfo_unexecuted_blocks=1 00:08:22.490 00:08:22.490 ' 00:08:22.490 16:24:04 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:22.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.490 --rc genhtml_branch_coverage=1 00:08:22.490 --rc genhtml_function_coverage=1 00:08:22.490 --rc genhtml_legend=1 00:08:22.490 --rc geninfo_all_blocks=1 00:08:22.490 --rc geninfo_unexecuted_blocks=1 00:08:22.490 00:08:22.490 ' 00:08:22.490 16:24:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:22.490 16:24:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:22.490 16:24:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:22.490 16:24:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.490 16:24:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.490 16:24:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.490 ************************************ 00:08:22.490 START TEST skip_rpc 00:08:22.490 ************************************ 00:08:22.490 16:24:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:22.490 16:24:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69669 00:08:22.490 16:24:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:22.490 16:24:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:22.490 16:24:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:22.490 [2024-12-06 16:24:04.251952] Starting SPDK v25.01-pre 
git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:08:22.490 [2024-12-06 16:24:04.252079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69669 ] 00:08:22.748 [2024-12-06 16:24:04.424187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.748 [2024-12-06 16:24:04.449674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69669 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69669 ']' 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69669 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69669 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69669' 00:08:28.132 killing process with pid 69669 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69669 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69669 00:08:28.132 00:08:28.132 real 0m5.427s 00:08:28.132 user 0m5.015s 00:08:28.132 sys 0m0.338s 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.132 16:24:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.132 ************************************ 00:08:28.132 END TEST skip_rpc 00:08:28.132 ************************************ 00:08:28.132 16:24:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:28.132 16:24:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.132 16:24:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.132 16:24:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.132 
************************************ 00:08:28.132 START TEST skip_rpc_with_json 00:08:28.132 ************************************ 00:08:28.132 16:24:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:28.132 16:24:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:28.132 16:24:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69751 00:08:28.132 16:24:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:28.132 16:24:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:28.132 16:24:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69751 00:08:28.132 16:24:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69751 ']' 00:08:28.132 16:24:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.132 16:24:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.132 16:24:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.132 16:24:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.132 16:24:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:28.132 [2024-12-06 16:24:09.737012] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:08:28.132 [2024-12-06 16:24:09.737230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69751 ] 00:08:28.132 [2024-12-06 16:24:09.909019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.132 [2024-12-06 16:24:09.936268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:29.081 [2024-12-06 16:24:10.576579] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:29.081 request: 00:08:29.081 { 00:08:29.081 "trtype": "tcp", 00:08:29.081 "method": "nvmf_get_transports", 00:08:29.081 "req_id": 1 00:08:29.081 } 00:08:29.081 Got JSON-RPC error response 00:08:29.081 response: 00:08:29.081 { 00:08:29.081 "code": -19, 00:08:29.081 "message": "No such device" 00:08:29.081 } 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:29.081 [2024-12-06 16:24:10.592671] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.081 16:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:29.081 { 00:08:29.081 "subsystems": [ 00:08:29.081 { 00:08:29.081 "subsystem": "fsdev", 00:08:29.081 "config": [ 00:08:29.081 { 00:08:29.081 "method": "fsdev_set_opts", 00:08:29.081 "params": { 00:08:29.081 "fsdev_io_pool_size": 65535, 00:08:29.081 "fsdev_io_cache_size": 256 00:08:29.081 } 00:08:29.081 } 00:08:29.081 ] 00:08:29.081 }, 00:08:29.081 { 00:08:29.081 "subsystem": "keyring", 00:08:29.081 "config": [] 00:08:29.081 }, 00:08:29.081 { 00:08:29.081 "subsystem": "iobuf", 00:08:29.081 "config": [ 00:08:29.081 { 00:08:29.081 "method": "iobuf_set_options", 00:08:29.081 "params": { 00:08:29.081 "small_pool_count": 8192, 00:08:29.081 "large_pool_count": 1024, 00:08:29.081 "small_bufsize": 8192, 00:08:29.081 "large_bufsize": 135168, 00:08:29.081 "enable_numa": false 00:08:29.081 } 00:08:29.081 } 00:08:29.081 ] 00:08:29.081 }, 00:08:29.081 { 00:08:29.081 "subsystem": "sock", 00:08:29.081 "config": [ 00:08:29.081 { 00:08:29.081 "method": "sock_set_default_impl", 00:08:29.081 "params": { 00:08:29.081 "impl_name": "posix" 00:08:29.081 } 00:08:29.081 }, 00:08:29.081 { 00:08:29.081 "method": "sock_impl_set_options", 00:08:29.081 "params": { 00:08:29.081 "impl_name": "ssl", 00:08:29.081 "recv_buf_size": 4096, 00:08:29.081 "send_buf_size": 4096, 00:08:29.081 "enable_recv_pipe": true, 00:08:29.081 "enable_quickack": false, 00:08:29.081 
"enable_placement_id": 0, 00:08:29.081 "enable_zerocopy_send_server": true, 00:08:29.081 "enable_zerocopy_send_client": false, 00:08:29.081 "zerocopy_threshold": 0, 00:08:29.081 "tls_version": 0, 00:08:29.081 "enable_ktls": false 00:08:29.081 } 00:08:29.081 }, 00:08:29.081 { 00:08:29.081 "method": "sock_impl_set_options", 00:08:29.081 "params": { 00:08:29.081 "impl_name": "posix", 00:08:29.081 "recv_buf_size": 2097152, 00:08:29.081 "send_buf_size": 2097152, 00:08:29.081 "enable_recv_pipe": true, 00:08:29.081 "enable_quickack": false, 00:08:29.081 "enable_placement_id": 0, 00:08:29.081 "enable_zerocopy_send_server": true, 00:08:29.081 "enable_zerocopy_send_client": false, 00:08:29.081 "zerocopy_threshold": 0, 00:08:29.081 "tls_version": 0, 00:08:29.081 "enable_ktls": false 00:08:29.081 } 00:08:29.081 } 00:08:29.081 ] 00:08:29.081 }, 00:08:29.081 { 00:08:29.081 "subsystem": "vmd", 00:08:29.081 "config": [] 00:08:29.081 }, 00:08:29.081 { 00:08:29.081 "subsystem": "accel", 00:08:29.081 "config": [ 00:08:29.081 { 00:08:29.081 "method": "accel_set_options", 00:08:29.081 "params": { 00:08:29.081 "small_cache_size": 128, 00:08:29.081 "large_cache_size": 16, 00:08:29.081 "task_count": 2048, 00:08:29.081 "sequence_count": 2048, 00:08:29.081 "buf_count": 2048 00:08:29.081 } 00:08:29.081 } 00:08:29.081 ] 00:08:29.081 }, 00:08:29.081 { 00:08:29.081 "subsystem": "bdev", 00:08:29.082 "config": [ 00:08:29.082 { 00:08:29.082 "method": "bdev_set_options", 00:08:29.082 "params": { 00:08:29.082 "bdev_io_pool_size": 65535, 00:08:29.082 "bdev_io_cache_size": 256, 00:08:29.082 "bdev_auto_examine": true, 00:08:29.082 "iobuf_small_cache_size": 128, 00:08:29.082 "iobuf_large_cache_size": 16 00:08:29.082 } 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "method": "bdev_raid_set_options", 00:08:29.082 "params": { 00:08:29.082 "process_window_size_kb": 1024, 00:08:29.082 "process_max_bandwidth_mb_sec": 0 00:08:29.082 } 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "method": "bdev_iscsi_set_options", 
00:08:29.082 "params": { 00:08:29.082 "timeout_sec": 30 00:08:29.082 } 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "method": "bdev_nvme_set_options", 00:08:29.082 "params": { 00:08:29.082 "action_on_timeout": "none", 00:08:29.082 "timeout_us": 0, 00:08:29.082 "timeout_admin_us": 0, 00:08:29.082 "keep_alive_timeout_ms": 10000, 00:08:29.082 "arbitration_burst": 0, 00:08:29.082 "low_priority_weight": 0, 00:08:29.082 "medium_priority_weight": 0, 00:08:29.082 "high_priority_weight": 0, 00:08:29.082 "nvme_adminq_poll_period_us": 10000, 00:08:29.082 "nvme_ioq_poll_period_us": 0, 00:08:29.082 "io_queue_requests": 0, 00:08:29.082 "delay_cmd_submit": true, 00:08:29.082 "transport_retry_count": 4, 00:08:29.082 "bdev_retry_count": 3, 00:08:29.082 "transport_ack_timeout": 0, 00:08:29.082 "ctrlr_loss_timeout_sec": 0, 00:08:29.082 "reconnect_delay_sec": 0, 00:08:29.082 "fast_io_fail_timeout_sec": 0, 00:08:29.082 "disable_auto_failback": false, 00:08:29.082 "generate_uuids": false, 00:08:29.082 "transport_tos": 0, 00:08:29.082 "nvme_error_stat": false, 00:08:29.082 "rdma_srq_size": 0, 00:08:29.082 "io_path_stat": false, 00:08:29.082 "allow_accel_sequence": false, 00:08:29.082 "rdma_max_cq_size": 0, 00:08:29.082 "rdma_cm_event_timeout_ms": 0, 00:08:29.082 "dhchap_digests": [ 00:08:29.082 "sha256", 00:08:29.082 "sha384", 00:08:29.082 "sha512" 00:08:29.082 ], 00:08:29.082 "dhchap_dhgroups": [ 00:08:29.082 "null", 00:08:29.082 "ffdhe2048", 00:08:29.082 "ffdhe3072", 00:08:29.082 "ffdhe4096", 00:08:29.082 "ffdhe6144", 00:08:29.082 "ffdhe8192" 00:08:29.082 ] 00:08:29.082 } 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "method": "bdev_nvme_set_hotplug", 00:08:29.082 "params": { 00:08:29.082 "period_us": 100000, 00:08:29.082 "enable": false 00:08:29.082 } 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "method": "bdev_wait_for_examine" 00:08:29.082 } 00:08:29.082 ] 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "subsystem": "scsi", 00:08:29.082 "config": null 00:08:29.082 }, 00:08:29.082 { 
00:08:29.082 "subsystem": "scheduler", 00:08:29.082 "config": [ 00:08:29.082 { 00:08:29.082 "method": "framework_set_scheduler", 00:08:29.082 "params": { 00:08:29.082 "name": "static" 00:08:29.082 } 00:08:29.082 } 00:08:29.082 ] 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "subsystem": "vhost_scsi", 00:08:29.082 "config": [] 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "subsystem": "vhost_blk", 00:08:29.082 "config": [] 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "subsystem": "ublk", 00:08:29.082 "config": [] 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "subsystem": "nbd", 00:08:29.082 "config": [] 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "subsystem": "nvmf", 00:08:29.082 "config": [ 00:08:29.082 { 00:08:29.082 "method": "nvmf_set_config", 00:08:29.082 "params": { 00:08:29.082 "discovery_filter": "match_any", 00:08:29.082 "admin_cmd_passthru": { 00:08:29.082 "identify_ctrlr": false 00:08:29.082 }, 00:08:29.082 "dhchap_digests": [ 00:08:29.082 "sha256", 00:08:29.082 "sha384", 00:08:29.082 "sha512" 00:08:29.082 ], 00:08:29.082 "dhchap_dhgroups": [ 00:08:29.082 "null", 00:08:29.082 "ffdhe2048", 00:08:29.082 "ffdhe3072", 00:08:29.082 "ffdhe4096", 00:08:29.082 "ffdhe6144", 00:08:29.082 "ffdhe8192" 00:08:29.082 ] 00:08:29.082 } 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "method": "nvmf_set_max_subsystems", 00:08:29.082 "params": { 00:08:29.082 "max_subsystems": 1024 00:08:29.082 } 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "method": "nvmf_set_crdt", 00:08:29.082 "params": { 00:08:29.082 "crdt1": 0, 00:08:29.082 "crdt2": 0, 00:08:29.082 "crdt3": 0 00:08:29.082 } 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "method": "nvmf_create_transport", 00:08:29.082 "params": { 00:08:29.082 "trtype": "TCP", 00:08:29.082 "max_queue_depth": 128, 00:08:29.082 "max_io_qpairs_per_ctrlr": 127, 00:08:29.082 "in_capsule_data_size": 4096, 00:08:29.082 "max_io_size": 131072, 00:08:29.082 "io_unit_size": 131072, 00:08:29.082 "max_aq_depth": 128, 00:08:29.082 "num_shared_buffers": 511, 
00:08:29.082 "buf_cache_size": 4294967295, 00:08:29.082 "dif_insert_or_strip": false, 00:08:29.082 "zcopy": false, 00:08:29.082 "c2h_success": true, 00:08:29.082 "sock_priority": 0, 00:08:29.082 "abort_timeout_sec": 1, 00:08:29.082 "ack_timeout": 0, 00:08:29.082 "data_wr_pool_size": 0 00:08:29.082 } 00:08:29.082 } 00:08:29.082 ] 00:08:29.082 }, 00:08:29.082 { 00:08:29.082 "subsystem": "iscsi", 00:08:29.082 "config": [ 00:08:29.082 { 00:08:29.082 "method": "iscsi_set_options", 00:08:29.082 "params": { 00:08:29.082 "node_base": "iqn.2016-06.io.spdk", 00:08:29.082 "max_sessions": 128, 00:08:29.082 "max_connections_per_session": 2, 00:08:29.082 "max_queue_depth": 64, 00:08:29.082 "default_time2wait": 2, 00:08:29.082 "default_time2retain": 20, 00:08:29.082 "first_burst_length": 8192, 00:08:29.082 "immediate_data": true, 00:08:29.082 "allow_duplicated_isid": false, 00:08:29.082 "error_recovery_level": 0, 00:08:29.082 "nop_timeout": 60, 00:08:29.082 "nop_in_interval": 30, 00:08:29.082 "disable_chap": false, 00:08:29.082 "require_chap": false, 00:08:29.082 "mutual_chap": false, 00:08:29.082 "chap_group": 0, 00:08:29.082 "max_large_datain_per_connection": 64, 00:08:29.082 "max_r2t_per_connection": 4, 00:08:29.082 "pdu_pool_size": 36864, 00:08:29.082 "immediate_data_pool_size": 16384, 00:08:29.082 "data_out_pool_size": 2048 00:08:29.082 } 00:08:29.082 } 00:08:29.082 ] 00:08:29.082 } 00:08:29.082 ] 00:08:29.082 } 00:08:29.082 16:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:29.082 16:24:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69751 00:08:29.082 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69751 ']' 00:08:29.082 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69751 00:08:29.082 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:29.082 16:24:10 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.082 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69751 00:08:29.082 killing process with pid 69751 00:08:29.082 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.082 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.082 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69751' 00:08:29.082 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69751 00:08:29.082 16:24:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69751 00:08:29.343 16:24:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69785 00:08:29.343 16:24:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:29.343 16:24:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:34.619 16:24:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69785 00:08:34.619 16:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69785 ']' 00:08:34.619 16:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69785 00:08:34.619 16:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:34.619 16:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.619 16:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69785 00:08:34.619 killing process with pid 69785 00:08:34.619 16:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.619 16:24:16 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.619 16:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69785' 00:08:34.619 16:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69785 00:08:34.619 16:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69785 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:34.877 ************************************ 00:08:34.877 END TEST skip_rpc_with_json 00:08:34.877 ************************************ 00:08:34.877 00:08:34.877 real 0m6.944s 00:08:34.877 user 0m6.529s 00:08:34.877 sys 0m0.712s 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:34.877 16:24:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:34.877 16:24:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.877 16:24:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.877 16:24:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.877 ************************************ 00:08:34.877 START TEST skip_rpc_with_delay 00:08:34.877 ************************************ 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:34.877 
16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:34.877 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:35.136 [2024-12-06 16:24:16.754617] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:35.136 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:35.136 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.136 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:35.136 ************************************ 00:08:35.136 END TEST skip_rpc_with_delay 00:08:35.136 ************************************ 00:08:35.136 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.136 00:08:35.136 real 0m0.165s 00:08:35.136 user 0m0.096s 00:08:35.136 sys 0m0.067s 00:08:35.136 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.136 16:24:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:35.136 16:24:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:35.136 16:24:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:35.136 16:24:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:35.136 16:24:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.136 16:24:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.136 16:24:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.136 ************************************ 00:08:35.136 START TEST exit_on_failed_rpc_init 00:08:35.136 ************************************ 00:08:35.136 16:24:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:35.136 16:24:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69891 00:08:35.136 16:24:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:35.136 16:24:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69891 00:08:35.136 16:24:16 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69891 ']' 00:08:35.136 16:24:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.136 16:24:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.136 16:24:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.136 16:24:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.136 16:24:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:35.395 [2024-12-06 16:24:16.982909] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:08:35.395 [2024-12-06 16:24:16.983045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69891 ] 00:08:35.395 [2024-12-06 16:24:17.151324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.395 [2024-12-06 16:24:17.178021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.332 16:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.332 16:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:36.332 16:24:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:36.332 16:24:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:36.333 16:24:17 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:36.333 16:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:36.333 16:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:36.333 16:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.333 16:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:36.333 16:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.333 16:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:36.333 16:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.333 16:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:36.333 16:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:36.333 16:24:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:36.333 [2024-12-06 16:24:17.915321] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:08:36.333 [2024-12-06 16:24:17.915518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69909 ] 00:08:36.333 [2024-12-06 16:24:18.081419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.333 [2024-12-06 16:24:18.117745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.333 [2024-12-06 16:24:18.117950] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:08:36.333 [2024-12-06 16:24:18.118064] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:36.333 [2024-12-06 16:24:18.118113] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69891 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69891 ']' 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69891 00:08:36.592 16:24:18 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69891 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69891' 00:08:36.592 killing process with pid 69891 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69891 00:08:36.592 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69891 00:08:36.851 00:08:36.852 real 0m1.754s 00:08:36.852 user 0m1.910s 00:08:36.852 sys 0m0.493s 00:08:36.852 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.852 16:24:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:36.852 ************************************ 00:08:36.852 END TEST exit_on_failed_rpc_init 00:08:36.852 ************************************ 00:08:37.112 16:24:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:37.112 00:08:37.112 real 0m14.783s 00:08:37.112 user 0m13.756s 00:08:37.112 sys 0m1.913s 00:08:37.112 16:24:18 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.112 ************************************ 00:08:37.112 END TEST skip_rpc 00:08:37.112 ************************************ 00:08:37.112 16:24:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.112 16:24:18 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:37.112 16:24:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.112 16:24:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.112 16:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:37.112 ************************************ 00:08:37.112 START TEST rpc_client 00:08:37.112 ************************************ 00:08:37.112 16:24:18 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:37.112 * Looking for test storage... 00:08:37.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:37.112 16:24:18 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.112 16:24:18 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.112 16:24:18 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.372 16:24:18 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@345 
-- # : 1 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.372 16:24:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:37.372 16:24:18 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.372 16:24:18 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.372 --rc genhtml_branch_coverage=1 00:08:37.372 --rc genhtml_function_coverage=1 00:08:37.372 --rc genhtml_legend=1 00:08:37.372 --rc geninfo_all_blocks=1 00:08:37.372 --rc geninfo_unexecuted_blocks=1 00:08:37.372 00:08:37.372 ' 00:08:37.372 16:24:18 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.372 --rc genhtml_branch_coverage=1 00:08:37.372 --rc genhtml_function_coverage=1 00:08:37.372 --rc 
genhtml_legend=1 00:08:37.372 --rc geninfo_all_blocks=1 00:08:37.372 --rc geninfo_unexecuted_blocks=1 00:08:37.372 00:08:37.372 ' 00:08:37.372 16:24:18 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.372 --rc genhtml_branch_coverage=1 00:08:37.372 --rc genhtml_function_coverage=1 00:08:37.372 --rc genhtml_legend=1 00:08:37.372 --rc geninfo_all_blocks=1 00:08:37.372 --rc geninfo_unexecuted_blocks=1 00:08:37.372 00:08:37.372 ' 00:08:37.372 16:24:18 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.372 --rc genhtml_branch_coverage=1 00:08:37.372 --rc genhtml_function_coverage=1 00:08:37.372 --rc genhtml_legend=1 00:08:37.372 --rc geninfo_all_blocks=1 00:08:37.372 --rc geninfo_unexecuted_blocks=1 00:08:37.372 00:08:37.372 ' 00:08:37.372 16:24:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:37.372 OK 00:08:37.372 16:24:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:37.372 00:08:37.372 real 0m0.275s 00:08:37.372 user 0m0.152s 00:08:37.372 sys 0m0.138s 00:08:37.372 16:24:19 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.372 16:24:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:37.372 ************************************ 00:08:37.372 END TEST rpc_client 00:08:37.372 ************************************ 00:08:37.372 16:24:19 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:37.372 16:24:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.372 16:24:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.372 16:24:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.372 ************************************ 00:08:37.372 START TEST json_config 
00:08:37.372 ************************************ 00:08:37.372 16:24:19 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:37.372 16:24:19 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.373 16:24:19 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.373 16:24:19 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.633 16:24:19 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.633 16:24:19 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.633 16:24:19 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.633 16:24:19 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.633 16:24:19 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.633 16:24:19 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.633 16:24:19 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.633 16:24:19 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.633 16:24:19 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.633 16:24:19 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.633 16:24:19 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.633 16:24:19 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.633 16:24:19 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:37.633 16:24:19 json_config -- scripts/common.sh@345 -- # : 1 00:08:37.633 16:24:19 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.633 16:24:19 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.633 16:24:19 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:37.633 16:24:19 json_config -- scripts/common.sh@353 -- # local d=1 00:08:37.633 16:24:19 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.633 16:24:19 json_config -- scripts/common.sh@355 -- # echo 1 00:08:37.633 16:24:19 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.633 16:24:19 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:37.633 16:24:19 json_config -- scripts/common.sh@353 -- # local d=2 00:08:37.633 16:24:19 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.633 16:24:19 json_config -- scripts/common.sh@355 -- # echo 2 00:08:37.633 16:24:19 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.633 16:24:19 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.633 16:24:19 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.633 16:24:19 json_config -- scripts/common.sh@368 -- # return 0 00:08:37.633 16:24:19 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.633 16:24:19 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.633 --rc genhtml_branch_coverage=1 00:08:37.633 --rc genhtml_function_coverage=1 00:08:37.633 --rc genhtml_legend=1 00:08:37.633 --rc geninfo_all_blocks=1 00:08:37.633 --rc geninfo_unexecuted_blocks=1 00:08:37.633 00:08:37.633 ' 00:08:37.633 16:24:19 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.633 --rc genhtml_branch_coverage=1 00:08:37.633 --rc genhtml_function_coverage=1 00:08:37.633 --rc genhtml_legend=1 00:08:37.633 --rc geninfo_all_blocks=1 00:08:37.633 --rc geninfo_unexecuted_blocks=1 00:08:37.633 00:08:37.633 ' 00:08:37.633 16:24:19 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.633 --rc genhtml_branch_coverage=1 00:08:37.633 --rc genhtml_function_coverage=1 00:08:37.633 --rc genhtml_legend=1 00:08:37.633 --rc geninfo_all_blocks=1 00:08:37.633 --rc geninfo_unexecuted_blocks=1 00:08:37.633 00:08:37.633 ' 00:08:37.633 16:24:19 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.633 --rc genhtml_branch_coverage=1 00:08:37.633 --rc genhtml_function_coverage=1 00:08:37.633 --rc genhtml_legend=1 00:08:37.633 --rc geninfo_all_blocks=1 00:08:37.633 --rc geninfo_unexecuted_blocks=1 00:08:37.633 00:08:37.633 ' 00:08:37.633 16:24:19 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4b60f70-3bfd-4379-bb78-1dcb5629a12f 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=b4b60f70-3bfd-4379-bb78-1dcb5629a12f 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.633 16:24:19 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.633 16:24:19 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.633 16:24:19 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.633 16:24:19 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.633 16:24:19 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.634 16:24:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.634 16:24:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.634 16:24:19 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.634 16:24:19 json_config -- paths/export.sh@5 -- # export PATH 00:08:37.634 16:24:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.634 16:24:19 json_config -- nvmf/common.sh@51 -- # : 0 00:08:37.634 16:24:19 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.634 16:24:19 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.634 16:24:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.634 16:24:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.634 16:24:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.634 16:24:19 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.634 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.634 16:24:19 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.634 16:24:19 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.634 16:24:19 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.634 16:24:19 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:08:37.634 16:24:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:37.634 16:24:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:37.634 16:24:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:37.634 16:24:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:37.634 16:24:19 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:37.634 WARNING: No tests are enabled so not running JSON configuration tests 00:08:37.634 16:24:19 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:37.634 00:08:37.634 real 0m0.227s 00:08:37.634 user 0m0.142s 00:08:37.634 sys 0m0.084s 00:08:37.634 16:24:19 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.634 16:24:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:37.634 ************************************ 00:08:37.634 END TEST json_config 00:08:37.634 ************************************ 00:08:37.634 16:24:19 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:37.634 16:24:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.634 16:24:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.634 16:24:19 -- common/autotest_common.sh@10 -- # set +x 00:08:37.634 ************************************ 00:08:37.634 START TEST json_config_extra_key 00:08:37.634 ************************************ 00:08:37.634 16:24:19 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:37.894 16:24:19 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.894 16:24:19 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:08:37.894 16:24:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.894 16:24:19 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.894 16:24:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:37.894 16:24:19 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.894 16:24:19 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.894 --rc genhtml_branch_coverage=1 00:08:37.894 --rc genhtml_function_coverage=1 00:08:37.894 --rc genhtml_legend=1 00:08:37.894 --rc geninfo_all_blocks=1 00:08:37.894 --rc geninfo_unexecuted_blocks=1 00:08:37.894 00:08:37.894 ' 00:08:37.894 16:24:19 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.894 --rc genhtml_branch_coverage=1 00:08:37.894 --rc genhtml_function_coverage=1 00:08:37.894 --rc 
genhtml_legend=1 00:08:37.894 --rc geninfo_all_blocks=1 00:08:37.894 --rc geninfo_unexecuted_blocks=1 00:08:37.894 00:08:37.894 ' 00:08:37.894 16:24:19 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.894 --rc genhtml_branch_coverage=1 00:08:37.894 --rc genhtml_function_coverage=1 00:08:37.894 --rc genhtml_legend=1 00:08:37.894 --rc geninfo_all_blocks=1 00:08:37.894 --rc geninfo_unexecuted_blocks=1 00:08:37.894 00:08:37.894 ' 00:08:37.894 16:24:19 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.894 --rc genhtml_branch_coverage=1 00:08:37.894 --rc genhtml_function_coverage=1 00:08:37.894 --rc genhtml_legend=1 00:08:37.894 --rc geninfo_all_blocks=1 00:08:37.894 --rc geninfo_unexecuted_blocks=1 00:08:37.894 00:08:37.894 ' 00:08:37.894 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:37.894 16:24:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:37.894 16:24:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4b60f70-3bfd-4379-bb78-1dcb5629a12f 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b4b60f70-3bfd-4379-bb78-1dcb5629a12f 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.895 16:24:19 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.895 16:24:19 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.895 16:24:19 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.895 16:24:19 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.895 16:24:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.895 16:24:19 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.895 16:24:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.895 16:24:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:37.895 16:24:19 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
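The cmp_versions xtrace near the top of this section (scripts/common.sh@364-368) compares two version strings component by component after an `IFS=.-:` split into the `ver1`/`ver2` arrays. A minimal standalone sketch of that compare — the function name and the zero-padding of the shorter array are mine, not the exact SPDK helper:

```shell
#!/usr/bin/env bash
# Component-wise "less than" over dotted versions, mirroring the ver1[v]/ver2[v]
# loop in the trace: split on '.', '-' and ':', pad the shorter list with
# zeros, and return 0 (true) at the first strictly smaller component.
version_lt() {
    local -a ver1 ver2
    local v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

This is the check the autotest uses (as `lt 1.15 2` in the trace) to decide whether the installed lcov is old enough to need the `--rc lcov_branch_coverage=1 ...` option spelling.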
00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.895 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.895 16:24:19 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.895 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:37.895 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:37.895 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:37.895 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:37.895 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:37.895 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:37.895 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:37.895 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:37.895 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:37.895 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:37.895 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:37.895 INFO: launching applications... 
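The `[: : integer expression expected` message logged above (nvmf/common.sh line 33) is a classic shell failure: an empty variable reaching a numeric `-eq` test. A small illustration of the failure mode and the usual `${var:-0}` guard — the variable name here is hypothetical, not the actual common.sh one:

```shell
#!/usr/bin/env bash
# An empty string in '[ ... -eq ... ]' makes test(1) complain with
# "integer expression expected", exactly as captured in the log above.
flag=""                                 # stands in for the unset flag
[ "$flag" -eq 1 ] 2>/dev/null || echo "numeric test failed on empty string"
# Defaulting the expansion keeps the numeric test well-formed:
if [ "${flag:-0}" -eq 1 ]; then echo "flag set"; else echo "flag unset"; fi
```

In the log the test still behaves as "false" (the script continues), which is why the run proceeds despite the diagnostic.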
00:08:37.895 16:24:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:37.895 16:24:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:37.895 16:24:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:37.895 16:24:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:37.895 16:24:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:37.895 16:24:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:37.895 16:24:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:37.895 16:24:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:37.895 16:24:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70097 00:08:37.895 16:24:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:37.895 Waiting for target to run... 00:08:37.895 16:24:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70097 /var/tmp/spdk_tgt.sock 00:08:37.895 16:24:19 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 70097 ']' 00:08:37.895 16:24:19 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:37.895 16:24:19 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.895 16:24:19 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:37.895 16:24:19 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:08:37.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:37.895 16:24:19 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.895 16:24:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:37.895 [2024-12-06 16:24:19.718983] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:08:37.895 [2024-12-06 16:24:19.719224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70097 ] 00:08:38.466 [2024-12-06 16:24:20.090747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.466 [2024-12-06 16:24:20.109096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.036 16:24:20 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.036 16:24:20 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:39.036 16:24:20 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:39.036 00:08:39.036 INFO: shutting down applications... 00:08:39.036 16:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
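The app lifecycle traced around this point — `json_config_test_start_app` launching spdk_tgt, `waitforlisten` blocking until the UNIX-domain RPC socket is up, then `json_config_test_shutdown_app` sending SIGINT and polling `kill -0` (30 tries, 0.5 s apart, as in json_config/common.sh) — can be sketched as two helpers. Function names are mine, and the real waitforlisten additionally probes the RPC server via rpc.py rather than only checking for the socket node:

```shell
#!/usr/bin/env bash
# Wait for a UNIX-domain socket node to appear, up to retries * 0.1 s.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}

# Signal the target, then poll 'kill -0' up to 30 times at 0.5 s intervals,
# matching the i<30 / kill -0 / sleep 0.5 loop in the trace below.
shutdown_app() {
    local pid=$1 sig=${2:-SIGINT} i
    kill -s "$sig" "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # gone: clean shutdown
        sleep 0.5
    done
    return 1                                      # still alive after ~15 s
}

wait_for_socket /var/tmp/missing-$$.sock 3 || echo "socket never appeared"
# Detached stand-in target (shells start background jobs with SIGINT
# ignored, so the demo sends SIGTERM; a real spdk_tgt is INT-able):
pid=$( ( sleep 60 >/dev/null 2>&1 & echo $! ) )
shutdown_app "$pid" SIGTERM && echo "target shutdown done"
```

The 30-iteration budget explains the roughly 1.7 s wall time the test reports: one startup poll plus a couple of 0.5 s shutdown polls.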
00:08:39.036 16:24:20 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:39.036 16:24:20 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:39.036 16:24:20 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:39.036 16:24:20 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70097 ]] 00:08:39.036 16:24:20 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70097 00:08:39.036 16:24:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:39.036 16:24:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:39.036 16:24:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70097 00:08:39.036 16:24:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:39.295 16:24:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:39.295 16:24:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:39.295 16:24:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70097 00:08:39.295 16:24:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:39.295 16:24:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:39.295 16:24:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:39.295 16:24:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:39.295 SPDK target shutdown done 00:08:39.295 Success 00:08:39.295 16:24:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:39.295 00:08:39.295 real 0m1.714s 00:08:39.295 user 0m1.468s 00:08:39.295 sys 0m0.493s 00:08:39.295 16:24:21 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.295 16:24:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:39.296 ************************************ 
00:08:39.296 END TEST json_config_extra_key 00:08:39.296 ************************************ 00:08:39.555 16:24:21 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:39.556 16:24:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.556 16:24:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.556 16:24:21 -- common/autotest_common.sh@10 -- # set +x 00:08:39.556 ************************************ 00:08:39.556 START TEST alias_rpc 00:08:39.556 ************************************ 00:08:39.556 16:24:21 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:39.556 * Looking for test storage... 00:08:39.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:39.556 16:24:21 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:39.556 16:24:21 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:39.556 16:24:21 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:39.556 16:24:21 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.556 16:24:21 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.556 16:24:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:39.816 16:24:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:39.816 16:24:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.816 16:24:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:39.816 16:24:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.816 16:24:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.816 16:24:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.816 16:24:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:39.816 16:24:21 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.816 16:24:21 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.816 --rc genhtml_branch_coverage=1 00:08:39.816 --rc genhtml_function_coverage=1 00:08:39.816 --rc genhtml_legend=1 00:08:39.816 --rc geninfo_all_blocks=1 00:08:39.816 --rc geninfo_unexecuted_blocks=1 00:08:39.816 00:08:39.816 ' 00:08:39.816 16:24:21 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:39.816 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.816 --rc genhtml_branch_coverage=1 00:08:39.816 --rc genhtml_function_coverage=1 00:08:39.816 --rc genhtml_legend=1 00:08:39.816 --rc geninfo_all_blocks=1 00:08:39.816 --rc geninfo_unexecuted_blocks=1 00:08:39.816 00:08:39.816 ' 00:08:39.816 16:24:21 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.816 --rc genhtml_branch_coverage=1 00:08:39.816 --rc genhtml_function_coverage=1 00:08:39.816 --rc genhtml_legend=1 00:08:39.816 --rc geninfo_all_blocks=1 00:08:39.816 --rc geninfo_unexecuted_blocks=1 00:08:39.816 00:08:39.816 ' 00:08:39.816 16:24:21 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:39.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.816 --rc genhtml_branch_coverage=1 00:08:39.816 --rc genhtml_function_coverage=1 00:08:39.816 --rc genhtml_legend=1 00:08:39.816 --rc geninfo_all_blocks=1 00:08:39.816 --rc geninfo_unexecuted_blocks=1 00:08:39.816 00:08:39.816 ' 00:08:39.816 16:24:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:39.816 16:24:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70176 00:08:39.816 16:24:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:39.816 16:24:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70176 00:08:39.816 16:24:21 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70176 ']' 00:08:39.816 16:24:21 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.816 16:24:21 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:39.816 16:24:21 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.816 16:24:21 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.816 16:24:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.816 [2024-12-06 16:24:21.499727] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:08:39.816 [2024-12-06 16:24:21.499872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70176 ] 00:08:40.076 [2024-12-06 16:24:21.676293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.076 [2024-12-06 16:24:21.703407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.645 16:24:22 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.645 16:24:22 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:40.645 16:24:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:40.904 16:24:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70176 00:08:40.904 16:24:22 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70176 ']' 00:08:40.904 16:24:22 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70176 00:08:40.904 16:24:22 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:40.904 16:24:22 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.904 16:24:22 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70176 00:08:40.904 16:24:22 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.904 16:24:22 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.904 killing process with 
pid 70176 00:08:40.904 16:24:22 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70176' 00:08:40.904 16:24:22 alias_rpc -- common/autotest_common.sh@973 -- # kill 70176 00:08:40.904 16:24:22 alias_rpc -- common/autotest_common.sh@978 -- # wait 70176 00:08:41.164 00:08:41.164 real 0m1.785s 00:08:41.164 user 0m1.812s 00:08:41.164 sys 0m0.521s 00:08:41.164 16:24:22 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.164 16:24:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.164 ************************************ 00:08:41.164 END TEST alias_rpc 00:08:41.164 ************************************ 00:08:41.423 16:24:23 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:41.423 16:24:23 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:41.423 16:24:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.423 16:24:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.423 16:24:23 -- common/autotest_common.sh@10 -- # set +x 00:08:41.423 ************************************ 00:08:41.423 START TEST spdkcli_tcp 00:08:41.423 ************************************ 00:08:41.423 16:24:23 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:41.423 * Looking for test storage... 
00:08:41.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:41.423 16:24:23 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:41.423 16:24:23 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:41.423 16:24:23 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:41.423 16:24:23 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.423 16:24:23 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:41.424 16:24:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:41.424 16:24:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.424 16:24:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:41.424 16:24:23 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.424 16:24:23 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.424 16:24:23 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.424 16:24:23 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:41.424 16:24:23 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.424 16:24:23 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:41.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.424 --rc genhtml_branch_coverage=1 00:08:41.424 --rc genhtml_function_coverage=1 00:08:41.424 --rc genhtml_legend=1 00:08:41.424 --rc geninfo_all_blocks=1 00:08:41.424 --rc geninfo_unexecuted_blocks=1 00:08:41.424 00:08:41.424 ' 00:08:41.424 16:24:23 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:41.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.424 --rc genhtml_branch_coverage=1 00:08:41.424 --rc genhtml_function_coverage=1 00:08:41.424 --rc genhtml_legend=1 00:08:41.424 --rc geninfo_all_blocks=1 00:08:41.424 --rc geninfo_unexecuted_blocks=1 00:08:41.424 00:08:41.424 ' 00:08:41.424 16:24:23 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:41.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.424 --rc genhtml_branch_coverage=1 00:08:41.424 --rc genhtml_function_coverage=1 00:08:41.424 --rc genhtml_legend=1 00:08:41.424 --rc geninfo_all_blocks=1 00:08:41.424 --rc geninfo_unexecuted_blocks=1 00:08:41.424 00:08:41.424 ' 00:08:41.424 16:24:23 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:41.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.424 --rc genhtml_branch_coverage=1 00:08:41.424 --rc genhtml_function_coverage=1 00:08:41.424 --rc genhtml_legend=1 00:08:41.424 --rc geninfo_all_blocks=1 00:08:41.424 --rc geninfo_unexecuted_blocks=1 00:08:41.424 00:08:41.424 ' 00:08:41.424 16:24:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:41.424 16:24:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:41.424 16:24:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:41.424 16:24:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:41.424 16:24:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:41.424 16:24:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:41.424 16:24:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:41.424 16:24:23 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.424 16:24:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.683 16:24:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70250 00:08:41.683 16:24:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:41.683 16:24:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70250 00:08:41.683 16:24:23 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 70250 ']' 00:08:41.683 16:24:23 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.683 16:24:23 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.683 16:24:23 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.683 16:24:23 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.684 16:24:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.684 [2024-12-06 16:24:23.359022] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:08:41.684 [2024-12-06 16:24:23.359146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70250 ] 00:08:41.943 [2024-12-06 16:24:23.532754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:41.944 [2024-12-06 16:24:23.562695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.944 [2024-12-06 16:24:23.562788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.514 16:24:24 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.514 16:24:24 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:42.514 16:24:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70267 00:08:42.514 16:24:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:42.514 16:24:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:42.773 [ 00:08:42.773 "bdev_malloc_delete", 
00:08:42.773 "bdev_malloc_create", 00:08:42.773 "bdev_null_resize", 00:08:42.773 "bdev_null_delete", 00:08:42.773 "bdev_null_create", 00:08:42.773 "bdev_nvme_cuse_unregister", 00:08:42.773 "bdev_nvme_cuse_register", 00:08:42.773 "bdev_opal_new_user", 00:08:42.773 "bdev_opal_set_lock_state", 00:08:42.773 "bdev_opal_delete", 00:08:42.773 "bdev_opal_get_info", 00:08:42.773 "bdev_opal_create", 00:08:42.773 "bdev_nvme_opal_revert", 00:08:42.773 "bdev_nvme_opal_init", 00:08:42.773 "bdev_nvme_send_cmd", 00:08:42.773 "bdev_nvme_set_keys", 00:08:42.773 "bdev_nvme_get_path_iostat", 00:08:42.773 "bdev_nvme_get_mdns_discovery_info", 00:08:42.773 "bdev_nvme_stop_mdns_discovery", 00:08:42.774 "bdev_nvme_start_mdns_discovery", 00:08:42.774 "bdev_nvme_set_multipath_policy", 00:08:42.774 "bdev_nvme_set_preferred_path", 00:08:42.774 "bdev_nvme_get_io_paths", 00:08:42.774 "bdev_nvme_remove_error_injection", 00:08:42.774 "bdev_nvme_add_error_injection", 00:08:42.774 "bdev_nvme_get_discovery_info", 00:08:42.774 "bdev_nvme_stop_discovery", 00:08:42.774 "bdev_nvme_start_discovery", 00:08:42.774 "bdev_nvme_get_controller_health_info", 00:08:42.774 "bdev_nvme_disable_controller", 00:08:42.774 "bdev_nvme_enable_controller", 00:08:42.774 "bdev_nvme_reset_controller", 00:08:42.774 "bdev_nvme_get_transport_statistics", 00:08:42.774 "bdev_nvme_apply_firmware", 00:08:42.774 "bdev_nvme_detach_controller", 00:08:42.774 "bdev_nvme_get_controllers", 00:08:42.774 "bdev_nvme_attach_controller", 00:08:42.774 "bdev_nvme_set_hotplug", 00:08:42.774 "bdev_nvme_set_options", 00:08:42.774 "bdev_passthru_delete", 00:08:42.774 "bdev_passthru_create", 00:08:42.774 "bdev_lvol_set_parent_bdev", 00:08:42.774 "bdev_lvol_set_parent", 00:08:42.774 "bdev_lvol_check_shallow_copy", 00:08:42.774 "bdev_lvol_start_shallow_copy", 00:08:42.774 "bdev_lvol_grow_lvstore", 00:08:42.774 "bdev_lvol_get_lvols", 00:08:42.774 "bdev_lvol_get_lvstores", 00:08:42.774 "bdev_lvol_delete", 00:08:42.774 "bdev_lvol_set_read_only", 
00:08:42.774 "bdev_lvol_resize", 00:08:42.774 "bdev_lvol_decouple_parent", 00:08:42.774 "bdev_lvol_inflate", 00:08:42.774 "bdev_lvol_rename", 00:08:42.774 "bdev_lvol_clone_bdev", 00:08:42.774 "bdev_lvol_clone", 00:08:42.774 "bdev_lvol_snapshot", 00:08:42.774 "bdev_lvol_create", 00:08:42.774 "bdev_lvol_delete_lvstore", 00:08:42.774 "bdev_lvol_rename_lvstore", 00:08:42.774 "bdev_lvol_create_lvstore", 00:08:42.774 "bdev_raid_set_options", 00:08:42.774 "bdev_raid_remove_base_bdev", 00:08:42.774 "bdev_raid_add_base_bdev", 00:08:42.774 "bdev_raid_delete", 00:08:42.774 "bdev_raid_create", 00:08:42.774 "bdev_raid_get_bdevs", 00:08:42.774 "bdev_error_inject_error", 00:08:42.774 "bdev_error_delete", 00:08:42.774 "bdev_error_create", 00:08:42.774 "bdev_split_delete", 00:08:42.774 "bdev_split_create", 00:08:42.774 "bdev_delay_delete", 00:08:42.774 "bdev_delay_create", 00:08:42.774 "bdev_delay_update_latency", 00:08:42.774 "bdev_zone_block_delete", 00:08:42.774 "bdev_zone_block_create", 00:08:42.774 "blobfs_create", 00:08:42.774 "blobfs_detect", 00:08:42.774 "blobfs_set_cache_size", 00:08:42.774 "bdev_aio_delete", 00:08:42.774 "bdev_aio_rescan", 00:08:42.774 "bdev_aio_create", 00:08:42.774 "bdev_ftl_set_property", 00:08:42.774 "bdev_ftl_get_properties", 00:08:42.774 "bdev_ftl_get_stats", 00:08:42.774 "bdev_ftl_unmap", 00:08:42.774 "bdev_ftl_unload", 00:08:42.774 "bdev_ftl_delete", 00:08:42.774 "bdev_ftl_load", 00:08:42.774 "bdev_ftl_create", 00:08:42.774 "bdev_virtio_attach_controller", 00:08:42.774 "bdev_virtio_scsi_get_devices", 00:08:42.774 "bdev_virtio_detach_controller", 00:08:42.774 "bdev_virtio_blk_set_hotplug", 00:08:42.774 "bdev_iscsi_delete", 00:08:42.774 "bdev_iscsi_create", 00:08:42.774 "bdev_iscsi_set_options", 00:08:42.774 "accel_error_inject_error", 00:08:42.774 "ioat_scan_accel_module", 00:08:42.774 "dsa_scan_accel_module", 00:08:42.774 "iaa_scan_accel_module", 00:08:42.774 "keyring_file_remove_key", 00:08:42.774 "keyring_file_add_key", 00:08:42.774 
"keyring_linux_set_options", 00:08:42.774 "fsdev_aio_delete", 00:08:42.774 "fsdev_aio_create", 00:08:42.774 "iscsi_get_histogram", 00:08:42.774 "iscsi_enable_histogram", 00:08:42.774 "iscsi_set_options", 00:08:42.774 "iscsi_get_auth_groups", 00:08:42.774 "iscsi_auth_group_remove_secret", 00:08:42.774 "iscsi_auth_group_add_secret", 00:08:42.774 "iscsi_delete_auth_group", 00:08:42.774 "iscsi_create_auth_group", 00:08:42.774 "iscsi_set_discovery_auth", 00:08:42.774 "iscsi_get_options", 00:08:42.774 "iscsi_target_node_request_logout", 00:08:42.774 "iscsi_target_node_set_redirect", 00:08:42.774 "iscsi_target_node_set_auth", 00:08:42.774 "iscsi_target_node_add_lun", 00:08:42.774 "iscsi_get_stats", 00:08:42.774 "iscsi_get_connections", 00:08:42.774 "iscsi_portal_group_set_auth", 00:08:42.774 "iscsi_start_portal_group", 00:08:42.774 "iscsi_delete_portal_group", 00:08:42.774 "iscsi_create_portal_group", 00:08:42.774 "iscsi_get_portal_groups", 00:08:42.774 "iscsi_delete_target_node", 00:08:42.774 "iscsi_target_node_remove_pg_ig_maps", 00:08:42.774 "iscsi_target_node_add_pg_ig_maps", 00:08:42.774 "iscsi_create_target_node", 00:08:42.774 "iscsi_get_target_nodes", 00:08:42.774 "iscsi_delete_initiator_group", 00:08:42.774 "iscsi_initiator_group_remove_initiators", 00:08:42.774 "iscsi_initiator_group_add_initiators", 00:08:42.774 "iscsi_create_initiator_group", 00:08:42.774 "iscsi_get_initiator_groups", 00:08:42.774 "nvmf_set_crdt", 00:08:42.774 "nvmf_set_config", 00:08:42.774 "nvmf_set_max_subsystems", 00:08:42.774 "nvmf_stop_mdns_prr", 00:08:42.774 "nvmf_publish_mdns_prr", 00:08:42.774 "nvmf_subsystem_get_listeners", 00:08:42.774 "nvmf_subsystem_get_qpairs", 00:08:42.774 "nvmf_subsystem_get_controllers", 00:08:42.774 "nvmf_get_stats", 00:08:42.774 "nvmf_get_transports", 00:08:42.774 "nvmf_create_transport", 00:08:42.774 "nvmf_get_targets", 00:08:42.774 "nvmf_delete_target", 00:08:42.774 "nvmf_create_target", 00:08:42.774 "nvmf_subsystem_allow_any_host", 00:08:42.774 
"nvmf_subsystem_set_keys", 00:08:42.774 "nvmf_subsystem_remove_host", 00:08:42.774 "nvmf_subsystem_add_host", 00:08:42.774 "nvmf_ns_remove_host", 00:08:42.774 "nvmf_ns_add_host", 00:08:42.774 "nvmf_subsystem_remove_ns", 00:08:42.774 "nvmf_subsystem_set_ns_ana_group", 00:08:42.774 "nvmf_subsystem_add_ns", 00:08:42.774 "nvmf_subsystem_listener_set_ana_state", 00:08:42.774 "nvmf_discovery_get_referrals", 00:08:42.774 "nvmf_discovery_remove_referral", 00:08:42.774 "nvmf_discovery_add_referral", 00:08:42.774 "nvmf_subsystem_remove_listener", 00:08:42.774 "nvmf_subsystem_add_listener", 00:08:42.774 "nvmf_delete_subsystem", 00:08:42.774 "nvmf_create_subsystem", 00:08:42.774 "nvmf_get_subsystems", 00:08:42.774 "env_dpdk_get_mem_stats", 00:08:42.774 "nbd_get_disks", 00:08:42.774 "nbd_stop_disk", 00:08:42.774 "nbd_start_disk", 00:08:42.774 "ublk_recover_disk", 00:08:42.774 "ublk_get_disks", 00:08:42.774 "ublk_stop_disk", 00:08:42.774 "ublk_start_disk", 00:08:42.774 "ublk_destroy_target", 00:08:42.774 "ublk_create_target", 00:08:42.774 "virtio_blk_create_transport", 00:08:42.774 "virtio_blk_get_transports", 00:08:42.774 "vhost_controller_set_coalescing", 00:08:42.774 "vhost_get_controllers", 00:08:42.774 "vhost_delete_controller", 00:08:42.774 "vhost_create_blk_controller", 00:08:42.774 "vhost_scsi_controller_remove_target", 00:08:42.774 "vhost_scsi_controller_add_target", 00:08:42.774 "vhost_start_scsi_controller", 00:08:42.774 "vhost_create_scsi_controller", 00:08:42.774 "thread_set_cpumask", 00:08:42.774 "scheduler_set_options", 00:08:42.774 "framework_get_governor", 00:08:42.774 "framework_get_scheduler", 00:08:42.774 "framework_set_scheduler", 00:08:42.774 "framework_get_reactors", 00:08:42.774 "thread_get_io_channels", 00:08:42.774 "thread_get_pollers", 00:08:42.774 "thread_get_stats", 00:08:42.774 "framework_monitor_context_switch", 00:08:42.774 "spdk_kill_instance", 00:08:42.774 "log_enable_timestamps", 00:08:42.774 "log_get_flags", 00:08:42.774 "log_clear_flag", 
00:08:42.774 "log_set_flag", 00:08:42.774 "log_get_level", 00:08:42.774 "log_set_level", 00:08:42.774 "log_get_print_level", 00:08:42.774 "log_set_print_level", 00:08:42.774 "framework_enable_cpumask_locks", 00:08:42.774 "framework_disable_cpumask_locks", 00:08:42.774 "framework_wait_init", 00:08:42.774 "framework_start_init", 00:08:42.774 "scsi_get_devices", 00:08:42.774 "bdev_get_histogram", 00:08:42.774 "bdev_enable_histogram", 00:08:42.774 "bdev_set_qos_limit", 00:08:42.774 "bdev_set_qd_sampling_period", 00:08:42.774 "bdev_get_bdevs", 00:08:42.774 "bdev_reset_iostat", 00:08:42.774 "bdev_get_iostat", 00:08:42.774 "bdev_examine", 00:08:42.774 "bdev_wait_for_examine", 00:08:42.774 "bdev_set_options", 00:08:42.774 "accel_get_stats", 00:08:42.774 "accel_set_options", 00:08:42.774 "accel_set_driver", 00:08:42.774 "accel_crypto_key_destroy", 00:08:42.774 "accel_crypto_keys_get", 00:08:42.774 "accel_crypto_key_create", 00:08:42.774 "accel_assign_opc", 00:08:42.774 "accel_get_module_info", 00:08:42.774 "accel_get_opc_assignments", 00:08:42.774 "vmd_rescan", 00:08:42.774 "vmd_remove_device", 00:08:42.774 "vmd_enable", 00:08:42.774 "sock_get_default_impl", 00:08:42.774 "sock_set_default_impl", 00:08:42.774 "sock_impl_set_options", 00:08:42.774 "sock_impl_get_options", 00:08:42.774 "iobuf_get_stats", 00:08:42.774 "iobuf_set_options", 00:08:42.774 "keyring_get_keys", 00:08:42.774 "framework_get_pci_devices", 00:08:42.774 "framework_get_config", 00:08:42.774 "framework_get_subsystems", 00:08:42.774 "fsdev_set_opts", 00:08:42.774 "fsdev_get_opts", 00:08:42.774 "trace_get_info", 00:08:42.774 "trace_get_tpoint_group_mask", 00:08:42.774 "trace_disable_tpoint_group", 00:08:42.774 "trace_enable_tpoint_group", 00:08:42.774 "trace_clear_tpoint_mask", 00:08:42.774 "trace_set_tpoint_mask", 00:08:42.774 "notify_get_notifications", 00:08:42.775 "notify_get_types", 00:08:42.775 "spdk_get_version", 00:08:42.775 "rpc_get_methods" 00:08:42.775 ] 00:08:42.775 16:24:24 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:08:42.775 16:24:24 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:42.775 16:24:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:08:42.775 16:24:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:08:42.775 16:24:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70250
00:08:42.775 16:24:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70250 ']'
00:08:42.775 16:24:24 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70250
00:08:42.775 16:24:24 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:08:42.775 16:24:24 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:42.775 16:24:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70250
00:08:42.775 killing process with pid 70250
16:24:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:42.775 16:24:24 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:42.775 16:24:24 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70250'
00:08:42.775 16:24:24 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70250
00:08:42.775 16:24:24 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70250
00:08:43.342
00:08:43.342 real 0m1.895s
00:08:43.342 user 0m3.231s
00:08:43.342 sys 0m0.574s
00:08:43.343 16:24:24 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:43.343 16:24:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:08:43.343 ************************************
00:08:43.343 END TEST spdkcli_tcp
00:08:43.343 ************************************
00:08:43.343 16:24:24 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:08:43.343 16:24:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:43.343 16:24:24 --
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.343 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:08:43.343 ************************************ 00:08:43.343 START TEST dpdk_mem_utility 00:08:43.343 ************************************ 00:08:43.343 16:24:24 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:43.343 * Looking for test storage... 00:08:43.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:43.343 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:43.343 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:08:43.343 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:43.343 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:43.343 
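The trace above steps through the `lt 1.15 2` / `cmp_versions` helper in scripts/common.sh: each version string is split into components on `.`, `-`, and `:` via IFS, and the components are compared numerically from left to right. A self-contained sketch of that idea (the `version_lt` name and the zero-fill of missing components are illustrative, not SPDK's exact implementation):

```shell
#!/usr/bin/env bash
# version_lt A B -> exit 0 iff version A sorts strictly before version B.
# Sketch of the component-wise comparison traced above: split on '.', '-'
# and ':' (numeric fields only), then compare each field left to right.
version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=${#ver1[@]} i a b
    if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
    for (( i = 0; i < len; i++ )); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields count as 0
        if (( a < b )); then
            return 0
        elif (( a > b )); then
            return 1
        fi
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
version_lt 2 1.15 || echo "2 is not < 1.15"
```

Comparing field by field is what makes `1.15 < 2` true even though a plain string compare would sort "1.15" after "2" numerically-naive readers might expect; only the first differing component decides the order.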
16:24:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.343 16:24:25 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:43.601 16:24:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:43.601 16:24:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.601 16:24:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:43.601 16:24:25 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.601 16:24:25 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.601 16:24:25 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.601 16:24:25 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:43.601 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.601 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:43.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.601 --rc genhtml_branch_coverage=1 00:08:43.601 --rc genhtml_function_coverage=1 00:08:43.601 --rc genhtml_legend=1 00:08:43.601 --rc geninfo_all_blocks=1 00:08:43.601 --rc geninfo_unexecuted_blocks=1 00:08:43.601 00:08:43.601 ' 00:08:43.601 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:43.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.601 --rc 
genhtml_branch_coverage=1 00:08:43.601 --rc genhtml_function_coverage=1 00:08:43.601 --rc genhtml_legend=1 00:08:43.601 --rc geninfo_all_blocks=1 00:08:43.601 --rc geninfo_unexecuted_blocks=1 00:08:43.601 00:08:43.601 ' 00:08:43.601 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:43.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.601 --rc genhtml_branch_coverage=1 00:08:43.601 --rc genhtml_function_coverage=1 00:08:43.601 --rc genhtml_legend=1 00:08:43.601 --rc geninfo_all_blocks=1 00:08:43.601 --rc geninfo_unexecuted_blocks=1 00:08:43.601 00:08:43.601 ' 00:08:43.601 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:43.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.601 --rc genhtml_branch_coverage=1 00:08:43.601 --rc genhtml_function_coverage=1 00:08:43.601 --rc genhtml_legend=1 00:08:43.601 --rc geninfo_all_blocks=1 00:08:43.601 --rc geninfo_unexecuted_blocks=1 00:08:43.601 00:08:43.601 ' 00:08:43.601 16:24:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:43.601 16:24:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70350 00:08:43.601 16:24:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:43.601 16:24:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70350 00:08:43.601 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70350 ']' 00:08:43.601 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.601 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
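The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a polling wait: the harness retries until the freshly launched spdk_tgt is reachable, up to `max_retries` attempts. A minimal stand-in for that pattern (illustrative only; SPDK's actual `waitforlisten` additionally probes the RPC socket rather than just checking the path):

```shell
#!/usr/bin/env bash
# Poll until a path (e.g. a target's UNIX-domain socket) appears, with a
# bounded retry budget. Real code would test -S (socket) on the RPC path;
# this sketch uses -e so it can be demonstrated with a plain file.
wait_for_path() {
    local path=$1 max_retries=${2:-50}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        if [ -e "$path" ]; then return 0; fi
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

# Demo: a background job stands in for spdk_tgt creating its socket.
demo=$(mktemp -u)            # hypothetical path; name only, not created yet
( sleep 0.3; touch "$demo" ) &
wait_for_path "$demo" && echo "ready: $demo"
wait                         # reap the background job
rm -f "$demo"
```

Bounding the retries matters in CI: if the target crashes during startup, the wait fails fast with a diagnostic instead of hanging the whole pipeline.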
00:08:43.601 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.601 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.601 16:24:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:43.601 [2024-12-06 16:24:25.286734] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:08:43.601 [2024-12-06 16:24:25.286877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70350 ] 00:08:43.858 [2024-12-06 16:24:25.457839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.858 [2024-12-06 16:24:25.486453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.426 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.426 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:44.426 16:24:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:44.426 16:24:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:44.426 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.426 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:44.426 { 00:08:44.426 "filename": "/tmp/spdk_mem_dump.txt" 00:08:44.426 } 00:08:44.426 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.426 16:24:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:44.426 DPDK memory size 818.000000 MiB in 1 heap(s) 00:08:44.426 1 heaps 
totaling size 818.000000 MiB 00:08:44.426 size: 818.000000 MiB heap id: 0 00:08:44.426 end heaps---------- 00:08:44.426 9 mempools totaling size 603.782043 MiB 00:08:44.426 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:44.426 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:44.426 size: 100.555481 MiB name: bdev_io_70350 00:08:44.426 size: 50.003479 MiB name: msgpool_70350 00:08:44.426 size: 36.509338 MiB name: fsdev_io_70350 00:08:44.426 size: 21.763794 MiB name: PDU_Pool 00:08:44.426 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:44.426 size: 4.133484 MiB name: evtpool_70350 00:08:44.426 size: 0.026123 MiB name: Session_Pool 00:08:44.426 end mempools------- 00:08:44.426 6 memzones totaling size 4.142822 MiB 00:08:44.426 size: 1.000366 MiB name: RG_ring_0_70350 00:08:44.426 size: 1.000366 MiB name: RG_ring_1_70350 00:08:44.426 size: 1.000366 MiB name: RG_ring_4_70350 00:08:44.426 size: 1.000366 MiB name: RG_ring_5_70350 00:08:44.426 size: 0.125366 MiB name: RG_ring_2_70350 00:08:44.426 size: 0.015991 MiB name: RG_ring_3_70350 00:08:44.426 end memzones------- 00:08:44.426 16:24:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:44.426 heap id: 0 total size: 818.000000 MiB number of busy elements: 320 number of free elements: 15 00:08:44.426 list of free elements. 
size: 10.801941 MiB 00:08:44.426 element at address: 0x200019200000 with size: 0.999878 MiB 00:08:44.426 element at address: 0x200019400000 with size: 0.999878 MiB 00:08:44.426 element at address: 0x200032000000 with size: 0.994446 MiB 00:08:44.426 element at address: 0x200000400000 with size: 0.993958 MiB 00:08:44.426 element at address: 0x200006400000 with size: 0.959839 MiB 00:08:44.426 element at address: 0x200012c00000 with size: 0.944275 MiB 00:08:44.426 element at address: 0x200019600000 with size: 0.936584 MiB 00:08:44.426 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:44.426 element at address: 0x20001ae00000 with size: 0.566956 MiB 00:08:44.426 element at address: 0x20000a600000 with size: 0.488892 MiB 00:08:44.426 element at address: 0x200000c00000 with size: 0.486267 MiB 00:08:44.426 element at address: 0x200019800000 with size: 0.485657 MiB 00:08:44.426 element at address: 0x200003e00000 with size: 0.480286 MiB 00:08:44.426 element at address: 0x200028200000 with size: 0.395935 MiB 00:08:44.426 element at address: 0x200000800000 with size: 0.351746 MiB 00:08:44.427 list of standard malloc elements. 
size: 199.269165 MiB 00:08:44.427 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:08:44.427 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:08:44.427 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:44.427 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:08:44.427 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:08:44.427 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:44.427 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:08:44.427 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:44.427 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:08:44.427 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:08:44.427 element at 
address: 0x2000004ff340 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000085e580 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087e840 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087e900 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:08:44.427 element at address: 0x20000087f080 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087f140 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087f200 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087f380 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087f440 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087f500 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000087f680 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d3c0 with 
size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:08:44.427 element at address: 
0x200000c7e8c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x200003efb980 with size: 0.000183 MiB 00:08:44.427 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:08:44.427 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:08:44.428 
element at address: 0x20000a67d700 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:08:44.428 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:08:44.428 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91240 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91300 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae913c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91480 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91e40 with size: 0.000183 
MiB 00:08:44.428 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93340 
with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:08:44.428 element at 
address: 0x20001ae94840 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:08:44.428 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x200028265680 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826c280 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826c480 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826c540 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826c600 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826c780 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826c840 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826c900 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826c9c0 with size: 0.000183 MiB 
00:08:44.428 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d080 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d140 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d200 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d380 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d440 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d500 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d680 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d740 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d800 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826d980 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826da40 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826db00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826de00 with size: 0.000183 MiB 00:08:44.428 element at address: 0x20002826dec0 with 
size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826df80 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e040 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e100 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e280 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e340 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e400 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e580 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e640 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e700 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e880 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826e940 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f000 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f180 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f240 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f300 with size: 0.000183 MiB 00:08:44.429 element at address: 
0x20002826f3c0 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f480 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f540 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f600 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f780 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f840 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f900 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:08:44.429 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:08:44.429 list of memzone associated elements. 
size: 607.928894 MiB 00:08:44.429 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:08:44.429 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:44.429 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:08:44.429 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:44.429 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:08:44.429 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_70350_0 00:08:44.429 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:44.429 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70350_0 00:08:44.429 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:08:44.429 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70350_0 00:08:44.429 element at address: 0x2000199be940 with size: 20.255554 MiB 00:08:44.429 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:44.429 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:08:44.429 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:44.429 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:44.429 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70350_0 00:08:44.429 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:44.429 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70350 00:08:44.429 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:44.429 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70350 00:08:44.429 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:08:44.429 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:44.429 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:08:44.429 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:44.429 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:08:44.429 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:44.429 element at address: 0x200003efba40 with size: 1.008118 MiB 00:08:44.429 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:44.429 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:44.429 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70350 00:08:44.429 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:44.429 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70350 00:08:44.429 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:08:44.429 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70350 00:08:44.429 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:08:44.429 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70350 00:08:44.429 element at address: 0x20000087f740 with size: 0.500488 MiB 00:08:44.429 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70350 00:08:44.429 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:44.429 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70350 00:08:44.429 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:08:44.429 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:44.429 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:08:44.429 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:44.429 element at address: 0x20001987c540 with size: 0.250488 MiB 00:08:44.429 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:44.429 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:44.429 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70350 00:08:44.429 element at address: 0x20000085e640 with size: 0.125488 MiB 00:08:44.429 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70350 00:08:44.429 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:08:44.429 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:44.429 element at address: 0x200028265740 with size: 0.023743 MiB 00:08:44.429 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:44.429 element at address: 0x20000085a380 with size: 0.016113 MiB 00:08:44.429 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70350 00:08:44.429 element at address: 0x20002826b880 with size: 0.002441 MiB 00:08:44.429 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:44.429 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:08:44.429 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70350 00:08:44.429 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:08:44.429 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70350 00:08:44.429 element at address: 0x20000085a180 with size: 0.000305 MiB 00:08:44.429 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70350 00:08:44.429 element at address: 0x20002826c340 with size: 0.000305 MiB 00:08:44.429 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:44.429 16:24:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:44.429 16:24:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70350 00:08:44.429 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70350 ']' 00:08:44.429 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70350 00:08:44.429 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:44.429 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.429 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70350 00:08:44.689 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.689 16:24:26 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.689 killing process with pid 70350 00:08:44.689 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70350' 00:08:44.689 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70350 00:08:44.689 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70350 00:08:44.949 00:08:44.949 real 0m1.677s 00:08:44.949 user 0m1.678s 00:08:44.949 sys 0m0.478s 00:08:44.949 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.949 16:24:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:44.949 ************************************ 00:08:44.949 END TEST dpdk_mem_utility 00:08:44.949 ************************************ 00:08:44.949 16:24:26 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:44.949 16:24:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.949 16:24:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.949 16:24:26 -- common/autotest_common.sh@10 -- # set +x 00:08:44.949 ************************************ 00:08:44.949 START TEST event 00:08:44.949 ************************************ 00:08:44.949 16:24:26 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:45.210 * Looking for test storage... 
00:08:45.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:45.210 16:24:26 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:45.210 16:24:26 event -- common/autotest_common.sh@1711 -- # lcov --version 00:08:45.210 16:24:26 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:45.210 16:24:26 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:45.210 16:24:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.210 16:24:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.210 16:24:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.210 16:24:26 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.210 16:24:26 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.210 16:24:26 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.210 16:24:26 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.210 16:24:26 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.210 16:24:26 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.210 16:24:26 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.210 16:24:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.210 16:24:26 event -- scripts/common.sh@344 -- # case "$op" in 00:08:45.210 16:24:26 event -- scripts/common.sh@345 -- # : 1 00:08:45.210 16:24:26 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.210 16:24:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.210 16:24:26 event -- scripts/common.sh@365 -- # decimal 1 00:08:45.210 16:24:26 event -- scripts/common.sh@353 -- # local d=1 00:08:45.210 16:24:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.210 16:24:26 event -- scripts/common.sh@355 -- # echo 1 00:08:45.210 16:24:26 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.210 16:24:26 event -- scripts/common.sh@366 -- # decimal 2 00:08:45.210 16:24:26 event -- scripts/common.sh@353 -- # local d=2 00:08:45.210 16:24:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.210 16:24:26 event -- scripts/common.sh@355 -- # echo 2 00:08:45.210 16:24:26 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.210 16:24:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.210 16:24:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.210 16:24:26 event -- scripts/common.sh@368 -- # return 0 00:08:45.210 16:24:26 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.210 16:24:26 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:45.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.210 --rc genhtml_branch_coverage=1 00:08:45.210 --rc genhtml_function_coverage=1 00:08:45.210 --rc genhtml_legend=1 00:08:45.210 --rc geninfo_all_blocks=1 00:08:45.210 --rc geninfo_unexecuted_blocks=1 00:08:45.210 00:08:45.210 ' 00:08:45.210 16:24:26 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:45.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.210 --rc genhtml_branch_coverage=1 00:08:45.210 --rc genhtml_function_coverage=1 00:08:45.210 --rc genhtml_legend=1 00:08:45.210 --rc geninfo_all_blocks=1 00:08:45.210 --rc geninfo_unexecuted_blocks=1 00:08:45.210 00:08:45.210 ' 00:08:45.210 16:24:26 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:45.210 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:45.210 --rc genhtml_branch_coverage=1 00:08:45.210 --rc genhtml_function_coverage=1 00:08:45.210 --rc genhtml_legend=1 00:08:45.210 --rc geninfo_all_blocks=1 00:08:45.210 --rc geninfo_unexecuted_blocks=1 00:08:45.210 00:08:45.210 ' 00:08:45.210 16:24:26 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:45.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.210 --rc genhtml_branch_coverage=1 00:08:45.210 --rc genhtml_function_coverage=1 00:08:45.210 --rc genhtml_legend=1 00:08:45.210 --rc geninfo_all_blocks=1 00:08:45.210 --rc geninfo_unexecuted_blocks=1 00:08:45.210 00:08:45.210 ' 00:08:45.210 16:24:26 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:45.210 16:24:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:45.210 16:24:26 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:45.210 16:24:26 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:45.210 16:24:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.210 16:24:26 event -- common/autotest_common.sh@10 -- # set +x 00:08:45.210 ************************************ 00:08:45.210 START TEST event_perf 00:08:45.210 ************************************ 00:08:45.210 16:24:26 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:45.210 Running I/O for 1 seconds...[2024-12-06 16:24:26.990669] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:08:45.210 [2024-12-06 16:24:26.990860] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70436 ] 00:08:45.470 [2024-12-06 16:24:27.166351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.470 Running I/O for 1 seconds...[2024-12-06 16:24:27.198961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.470 [2024-12-06 16:24:27.199184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.470 [2024-12-06 16:24:27.199193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.470 [2024-12-06 16:24:27.199339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.409 00:08:46.409 lcore 0: 201506 00:08:46.409 lcore 1: 201505 00:08:46.409 lcore 2: 201505 00:08:46.409 lcore 3: 201506 00:08:46.669 done. 
00:08:46.669 00:08:46.669 real 0m1.322s 00:08:46.669 user 0m4.092s 00:08:46.669 sys 0m0.110s 00:08:46.669 16:24:28 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.669 16:24:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:46.669 ************************************ 00:08:46.669 END TEST event_perf 00:08:46.669 ************************************ 00:08:46.669 16:24:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:46.669 16:24:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:46.669 16:24:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.669 16:24:28 event -- common/autotest_common.sh@10 -- # set +x 00:08:46.669 ************************************ 00:08:46.669 START TEST event_reactor 00:08:46.669 ************************************ 00:08:46.669 16:24:28 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:46.669 [2024-12-06 16:24:28.374214] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:08:46.669 [2024-12-06 16:24:28.374348] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70470 ] 00:08:46.929 [2024-12-06 16:24:28.545361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.929 [2024-12-06 16:24:28.570713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.866 test_start 00:08:47.866 oneshot 00:08:47.866 tick 100 00:08:47.866 tick 100 00:08:47.866 tick 250 00:08:47.866 tick 100 00:08:47.866 tick 100 00:08:47.866 tick 100 00:08:47.866 tick 250 00:08:47.866 tick 500 00:08:47.866 tick 100 00:08:47.866 tick 100 00:08:47.866 tick 250 00:08:47.866 tick 100 00:08:47.866 tick 100 00:08:47.866 test_end 00:08:47.866 00:08:47.866 real 0m1.294s 00:08:47.866 user 0m1.103s 00:08:47.866 sys 0m0.085s 00:08:47.866 16:24:29 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.866 16:24:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:47.866 ************************************ 00:08:47.866 END TEST event_reactor 00:08:47.866 ************************************ 00:08:47.866 16:24:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:47.866 16:24:29 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:47.866 16:24:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.866 16:24:29 event -- common/autotest_common.sh@10 -- # set +x 00:08:47.866 ************************************ 00:08:47.866 START TEST event_reactor_perf 00:08:47.866 ************************************ 00:08:47.866 16:24:29 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:48.125 [2024-12-06 
16:24:29.730864] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:08:48.125 [2024-12-06 16:24:29.730990] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70507 ] 00:08:48.125 [2024-12-06 16:24:29.904383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.125 [2024-12-06 16:24:29.930718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.504 test_start 00:08:49.504 test_end 00:08:49.504 Performance: 364578 events per second 00:08:49.504 00:08:49.504 real 0m1.302s 00:08:49.504 user 0m1.116s 00:08:49.504 sys 0m0.079s 00:08:49.504 16:24:30 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.504 16:24:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:49.504 ************************************ 00:08:49.504 END TEST event_reactor_perf 00:08:49.504 ************************************ 00:08:49.504 16:24:31 event -- event/event.sh@49 -- # uname -s 00:08:49.504 16:24:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:49.504 16:24:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:49.504 16:24:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.504 16:24:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.504 16:24:31 event -- common/autotest_common.sh@10 -- # set +x 00:08:49.504 ************************************ 00:08:49.504 START TEST event_scheduler 00:08:49.504 ************************************ 00:08:49.504 16:24:31 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:49.504 * Looking for test storage... 
00:08:49.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:49.504 16:24:31 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:49.504 16:24:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:08:49.504 16:24:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:49.504 16:24:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.504 16:24:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:49.504 16:24:31 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.504 16:24:31 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:49.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.504 --rc genhtml_branch_coverage=1 00:08:49.504 --rc genhtml_function_coverage=1 00:08:49.504 --rc genhtml_legend=1 00:08:49.504 --rc geninfo_all_blocks=1 00:08:49.504 --rc geninfo_unexecuted_blocks=1 00:08:49.504 00:08:49.504 ' 00:08:49.504 16:24:31 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:49.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.504 --rc genhtml_branch_coverage=1 00:08:49.504 --rc genhtml_function_coverage=1 00:08:49.504 --rc 
genhtml_legend=1 00:08:49.504 --rc geninfo_all_blocks=1 00:08:49.504 --rc geninfo_unexecuted_blocks=1 00:08:49.504 00:08:49.504 ' 00:08:49.504 16:24:31 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:49.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.504 --rc genhtml_branch_coverage=1 00:08:49.504 --rc genhtml_function_coverage=1 00:08:49.504 --rc genhtml_legend=1 00:08:49.504 --rc geninfo_all_blocks=1 00:08:49.504 --rc geninfo_unexecuted_blocks=1 00:08:49.504 00:08:49.504 ' 00:08:49.504 16:24:31 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:49.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.504 --rc genhtml_branch_coverage=1 00:08:49.504 --rc genhtml_function_coverage=1 00:08:49.504 --rc genhtml_legend=1 00:08:49.504 --rc geninfo_all_blocks=1 00:08:49.505 --rc geninfo_unexecuted_blocks=1 00:08:49.505 00:08:49.505 ' 00:08:49.505 16:24:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:49.505 16:24:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70577 00:08:49.505 16:24:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:49.505 16:24:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:49.505 16:24:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70577 00:08:49.505 16:24:31 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70577 ']' 00:08:49.505 16:24:31 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.505 16:24:31 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.505 16:24:31 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:49.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.505 16:24:31 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.505 16:24:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:49.764 [2024-12-06 16:24:31.350870] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:08:49.764 [2024-12-06 16:24:31.351004] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70577 ] 00:08:49.764 [2024-12-06 16:24:31.526198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.764 [2024-12-06 16:24:31.556971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.764 [2024-12-06 16:24:31.557163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.764 [2024-12-06 16:24:31.557290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.764 [2024-12-06 16:24:31.557449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.702 16:24:32 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.702 16:24:32 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:50.702 16:24:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:50.702 16:24:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.702 16:24:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:50.702 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:50.702 POWER: Cannot set governor of lcore 0 to userspace 00:08:50.702 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:50.702 POWER: Cannot set governor of lcore 0 to performance 00:08:50.702 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:50.702 POWER: Cannot set governor of lcore 0 to userspace 00:08:50.702 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:50.702 POWER: Cannot set governor of lcore 0 to userspace 00:08:50.702 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:50.702 POWER: Unable to set Power Management Environment for lcore 0 00:08:50.702 [2024-12-06 16:24:32.241719] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:50.702 [2024-12-06 16:24:32.241752] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:50.702 [2024-12-06 16:24:32.241775] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:50.702 [2024-12-06 16:24:32.241800] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:50.702 [2024-12-06 16:24:32.241817] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:50.702 [2024-12-06 16:24:32.241853] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:50.702 16:24:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.702 16:24:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:50.702 16:24:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.702 16:24:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:50.702 [2024-12-06 16:24:32.313845] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:08:50.702 16:24:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.702 16:24:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:50.702 16:24:32 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.702 16:24:32 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.702 16:24:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:50.702 ************************************ 00:08:50.702 START TEST scheduler_create_thread 00:08:50.702 ************************************ 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.702 2 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.702 3 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.702 4 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.702 5 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.702 6 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.702 7 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.702 8 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.702 9 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.702 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:51.268 10 00:08:51.268 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.268 16:24:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:08:51.268 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.268 16:24:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:52.644 16:24:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.644 16:24:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:52.644 16:24:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:52.644 16:24:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.644 16:24:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.211 16:24:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.211 16:24:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:53.211 16:24:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.211 16:24:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:54.150 16:24:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.150 16:24:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:54.150 16:24:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:54.150 16:24:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.150 16:24:35 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:54.719 16:24:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.719 00:08:54.719 real 0m4.211s 00:08:54.719 user 0m0.028s 00:08:54.719 sys 0m0.006s 00:08:54.719 16:24:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.719 16:24:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:54.719 ************************************ 00:08:54.719 END TEST scheduler_create_thread 00:08:54.719 ************************************ 00:08:54.979 16:24:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:54.979 16:24:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70577 00:08:54.979 16:24:36 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70577 ']' 00:08:54.979 16:24:36 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70577 00:08:54.979 16:24:36 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:54.979 16:24:36 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.979 16:24:36 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70577 00:08:54.979 16:24:36 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:54.979 16:24:36 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:54.979 killing process with pid 70577 00:08:54.979 16:24:36 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70577' 00:08:54.979 16:24:36 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70577 00:08:54.979 16:24:36 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70577 00:08:55.239 [2024-12-06 16:24:36.818277] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:55.498 00:08:55.498 real 0m6.047s 00:08:55.498 user 0m13.187s 00:08:55.498 sys 0m0.453s 00:08:55.498 16:24:37 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.498 16:24:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:55.498 ************************************ 00:08:55.498 END TEST event_scheduler 00:08:55.498 ************************************ 00:08:55.498 16:24:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:55.498 16:24:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:55.498 16:24:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.498 16:24:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.498 16:24:37 event -- common/autotest_common.sh@10 -- # set +x 00:08:55.498 ************************************ 00:08:55.498 START TEST app_repeat 00:08:55.498 ************************************ 00:08:55.498 16:24:37 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70696 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:55.498 
16:24:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70696' 00:08:55.498 Process app_repeat pid: 70696 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:55.498 spdk_app_start Round 0 00:08:55.498 16:24:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70696 /var/tmp/spdk-nbd.sock 00:08:55.498 16:24:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70696 ']' 00:08:55.498 16:24:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:55.498 16:24:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.498 16:24:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:55.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:55.498 16:24:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.498 16:24:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:55.499 [2024-12-06 16:24:37.211105] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:08:55.499 [2024-12-06 16:24:37.211261] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70696 ] 00:08:55.757 [2024-12-06 16:24:37.383833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:55.757 [2024-12-06 16:24:37.413858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.757 [2024-12-06 16:24:37.413957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.326 16:24:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.326 16:24:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:56.327 16:24:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:56.602 Malloc0 00:08:56.602 16:24:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:56.861 Malloc1 00:08:56.861 16:24:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:56.861 16:24:38 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:56.861 16:24:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:57.121 /dev/nbd0 00:08:57.121 16:24:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:57.121 16:24:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:57.121 1+0 records in 00:08:57.121 1+0 
records out 00:08:57.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416264 s, 9.8 MB/s 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:57.121 16:24:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:57.121 16:24:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.121 16:24:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.121 16:24:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:57.381 /dev/nbd1 00:08:57.381 16:24:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:57.381 16:24:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:57.381 1+0 records in 00:08:57.381 1+0 records out 00:08:57.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380579 s, 10.8 MB/s 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:57.381 16:24:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:57.381 16:24:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.381 16:24:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.381 16:24:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:57.381 16:24:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.381 16:24:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:57.642 { 00:08:57.642 "nbd_device": "/dev/nbd0", 00:08:57.642 "bdev_name": "Malloc0" 00:08:57.642 }, 00:08:57.642 { 00:08:57.642 "nbd_device": "/dev/nbd1", 00:08:57.642 "bdev_name": "Malloc1" 00:08:57.642 } 00:08:57.642 ]' 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:57.642 { 00:08:57.642 "nbd_device": "/dev/nbd0", 00:08:57.642 "bdev_name": "Malloc0" 00:08:57.642 }, 00:08:57.642 { 00:08:57.642 "nbd_device": "/dev/nbd1", 00:08:57.642 "bdev_name": "Malloc1" 00:08:57.642 } 00:08:57.642 ]' 
00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:57.642 /dev/nbd1' 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:57.642 /dev/nbd1' 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:57.642 256+0 records in 00:08:57.642 256+0 records out 00:08:57.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126691 s, 82.8 MB/s 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:57.642 256+0 records in 00:08:57.642 256+0 records out 00:08:57.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223184 s, 47.0 MB/s 00:08:57.642 16:24:39 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:57.642 16:24:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:57.902 256+0 records in 00:08:57.902 256+0 records out 00:08:57.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272903 s, 38.4 MB/s 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.902 16:24:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:57.903 16:24:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:57.903 16:24:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.903 16:24:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:57.903 16:24:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.162 16:24:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.421 16:24:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:58.421 16:24:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:58.421 16:24:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:58.421 16:24:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:58.681 16:24:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:58.681 16:24:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:58.681 16:24:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:58.681 16:24:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:58.681 16:24:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:58.681 16:24:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:58.681 16:24:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:58.681 16:24:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:58.681 16:24:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:58.972 16:24:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:58.972 [2024-12-06 16:24:40.669403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:58.972 [2024-12-06 16:24:40.698660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.972 [2024-12-06 16:24:40.698661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.972 
[2024-12-06 16:24:40.742152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:58.972 [2024-12-06 16:24:40.742367] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:02.266 16:24:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:02.266 16:24:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:02.266 spdk_app_start Round 1 00:09:02.266 16:24:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70696 /var/tmp/spdk-nbd.sock 00:09:02.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:02.266 16:24:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70696 ']' 00:09:02.266 16:24:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:02.266 16:24:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.266 16:24:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:02.266 16:24:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.266 16:24:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:02.266 16:24:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.266 16:24:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:02.266 16:24:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:02.266 Malloc0 00:09:02.266 16:24:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:02.525 Malloc1 00:09:02.525 16:24:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:02.525 16:24:44 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:02.525 16:24:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:02.785 /dev/nbd0 00:09:02.785 16:24:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:02.785 16:24:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:02.785 1+0 records in 00:09:02.785 1+0 records out 00:09:02.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045834 s, 8.9 MB/s 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:02.785 16:24:44 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:02.785 16:24:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:02.785 16:24:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.785 16:24:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:02.785 16:24:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:03.044 /dev/nbd1 00:09:03.044 16:24:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:03.044 16:24:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:03.044 1+0 records in 00:09:03.044 1+0 records out 00:09:03.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555648 s, 7.4 MB/s 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:03.044 16:24:44 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:03.044 16:24:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:03.044 16:24:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:03.044 16:24:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:03.044 16:24:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:03.044 16:24:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.044 16:24:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:03.304 { 00:09:03.304 "nbd_device": "/dev/nbd0", 00:09:03.304 "bdev_name": "Malloc0" 00:09:03.304 }, 00:09:03.304 { 00:09:03.304 "nbd_device": "/dev/nbd1", 00:09:03.304 "bdev_name": "Malloc1" 00:09:03.304 } 00:09:03.304 ]' 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:03.304 { 00:09:03.304 "nbd_device": "/dev/nbd0", 00:09:03.304 "bdev_name": "Malloc0" 00:09:03.304 }, 00:09:03.304 { 00:09:03.304 "nbd_device": "/dev/nbd1", 00:09:03.304 "bdev_name": "Malloc1" 00:09:03.304 } 00:09:03.304 ]' 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:03.304 /dev/nbd1' 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:03.304 /dev/nbd1' 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:03.304 
16:24:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:03.304 16:24:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:03.563 256+0 records in 00:09:03.563 256+0 records out 00:09:03.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143377 s, 73.1 MB/s 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:03.563 256+0 records in 00:09:03.563 256+0 records out 00:09:03.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214496 s, 48.9 MB/s 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:03.563 256+0 records in 00:09:03.563 256+0 records out 00:09:03.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239611 s, 43.8 MB/s 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.563 16:24:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:03.822 16:24:45 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:03.822 16:24:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:03.822 16:24:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:03.822 16:24:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.822 16:24:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.822 16:24:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:03.822 16:24:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:03.822 16:24:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.822 16:24:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.822 16:24:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:04.081 16:24:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:04.081 16:24:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:04.081 16:24:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:04.081 16:24:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.081 16:24:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.081 16:24:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:04.081 16:24:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:04.081 16:24:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.081 16:24:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:04.081 16:24:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.081 16:24:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:04.340 16:24:45 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:04.340 16:24:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:04.340 16:24:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:04.340 16:24:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:04.340 16:24:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:04.340 16:24:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:04.340 16:24:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:04.340 16:24:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:04.340 16:24:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:04.340 16:24:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:04.340 16:24:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:04.340 16:24:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:04.340 16:24:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:04.598 16:24:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:04.598 [2024-12-06 16:24:46.371807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:04.598 [2024-12-06 16:24:46.402279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.599 [2024-12-06 16:24:46.402314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.857 [2024-12-06 16:24:46.444522] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:04.857 [2024-12-06 16:24:46.444590] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:08.147 spdk_app_start Round 2 00:09:08.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:08.147 16:24:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:08.147 16:24:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:08.147 16:24:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70696 /var/tmp/spdk-nbd.sock 00:09:08.147 16:24:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70696 ']' 00:09:08.147 16:24:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:08.147 16:24:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.147 16:24:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:08.147 16:24:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.147 16:24:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:08.147 16:24:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.147 16:24:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:08.147 16:24:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:08.147 Malloc0 00:09:08.147 16:24:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:08.147 Malloc1 00:09:08.147 16:24:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:08.147 16:24:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.147 16:24:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:08.147 16:24:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:08.147 16:24:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.147 16:24:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:08.147 16:24:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:08.148 16:24:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.148 16:24:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:08.148 16:24:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:08.148 16:24:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.148 16:24:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:08.148 16:24:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:08.148 16:24:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:08.148 16:24:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:08.148 16:24:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:08.407 /dev/nbd0 00:09:08.407 16:24:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:08.407 16:24:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:08.407 1+0 records in 00:09:08.407 1+0 records out 00:09:08.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338861 s, 12.1 MB/s 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:08.407 16:24:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:08.407 16:24:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.407 16:24:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:08.407 16:24:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:08.779 /dev/nbd1 00:09:08.779 16:24:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:08.779 16:24:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:08.779 16:24:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:08.779 16:24:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:08.779 16:24:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:08.779 16:24:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:08.779 16:24:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:08.779 16:24:50 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:09:08.779 16:24:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:08.779 16:24:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:08.779 16:24:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:08.779 1+0 records in 00:09:08.779 1+0 records out 00:09:08.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272497 s, 15.0 MB/s 00:09:08.779 16:24:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:08.779 16:24:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:08.780 16:24:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:08.780 16:24:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:08.780 16:24:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:08.780 16:24:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:08.780 16:24:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:08.780 16:24:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:08.780 16:24:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.780 16:24:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:09.038 { 00:09:09.038 "nbd_device": "/dev/nbd0", 00:09:09.038 "bdev_name": "Malloc0" 00:09:09.038 }, 00:09:09.038 { 00:09:09.038 "nbd_device": "/dev/nbd1", 00:09:09.038 "bdev_name": "Malloc1" 00:09:09.038 } 00:09:09.038 ]' 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:09.038 { 
00:09:09.038 "nbd_device": "/dev/nbd0", 00:09:09.038 "bdev_name": "Malloc0" 00:09:09.038 }, 00:09:09.038 { 00:09:09.038 "nbd_device": "/dev/nbd1", 00:09:09.038 "bdev_name": "Malloc1" 00:09:09.038 } 00:09:09.038 ]' 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:09.038 /dev/nbd1' 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:09.038 /dev/nbd1' 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.038 16:24:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:09.039 256+0 records in 00:09:09.039 256+0 records out 00:09:09.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121916 s, 86.0 MB/s 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.039 16:24:50 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:09.039 256+0 records in 00:09:09.039 256+0 records out 00:09:09.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022538 s, 46.5 MB/s 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:09.039 256+0 records in 00:09:09.039 256+0 records out 00:09:09.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246764 s, 42.5 MB/s 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.039 16:24:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:09.296 16:24:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:09.296 16:24:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:09.296 16:24:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:09.296 16:24:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.296 16:24:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.296 16:24:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:09.296 16:24:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:09.296 16:24:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.296 16:24:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.296 16:24:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:09.554 16:24:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:09.554 16:24:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:09.554 16:24:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:09.554 16:24:51 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.554 16:24:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.554 16:24:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:09.554 16:24:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:09.554 16:24:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.554 16:24:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:09.554 16:24:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.554 16:24:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.814 16:24:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:09.814 16:24:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:09.814 16:24:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:09.814 16:24:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:09.814 16:24:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:09.814 16:24:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:09.814 16:24:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:09.814 16:24:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:09.814 16:24:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:09.814 16:24:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:09.814 16:24:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:09.814 16:24:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:09.814 16:24:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:10.072 16:24:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:10.330 
[2024-12-06 16:24:51.999635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:10.330 [2024-12-06 16:24:52.027121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.330 [2024-12-06 16:24:52.027121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.330 [2024-12-06 16:24:52.070945] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:10.330 [2024-12-06 16:24:52.071011] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:13.617 16:24:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70696 /var/tmp/spdk-nbd.sock 00:09:13.617 16:24:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70696 ']' 00:09:13.617 16:24:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:13.617 16:24:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:13.617 16:24:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:13.617 16:24:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.617 16:24:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:13.617 16:24:55 event.app_repeat -- event/event.sh@39 -- # killprocess 70696 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70696 ']' 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70696 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70696 00:09:13.617 killing process with pid 70696 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70696' 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70696 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70696 00:09:13.617 spdk_app_start is called in Round 0. 00:09:13.617 Shutdown signal received, stop current app iteration 00:09:13.617 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 reinitialization... 00:09:13.617 spdk_app_start is called in Round 1. 00:09:13.617 Shutdown signal received, stop current app iteration 00:09:13.617 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 reinitialization... 00:09:13.617 spdk_app_start is called in Round 2. 
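killprocess, invoked here against pid 70696, reduces to a liveness check, a sanity check on the process name (the `'[' reactor_0 = sudo ']'` test in the trace), and a kill followed by a wait. A minimal standalone version reconstructed from the trace; it assumes the pid is a child of the calling shell so `wait` can reap it:

```shell
# Sketch of the killprocess helper as seen in the trace: refuse to touch
# dead processes or anything running as 'sudo', then terminate and reap.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1            # still alive?
    process_name=$(ps --no-headers -o comm= -p "$pid")
    [ "$process_name" = sudo ] && return 1            # never kill sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap; ignore kill status
}
```

The trailing `wait` is what the suite's separate `wait 70696` step performs: it guarantees the target has fully exited before the next test round reuses the RPC socket.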
00:09:13.617 Shutdown signal received, stop current app iteration 00:09:13.617 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 reinitialization... 00:09:13.617 spdk_app_start is called in Round 3. 00:09:13.617 Shutdown signal received, stop current app iteration 00:09:13.617 16:24:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:13.617 16:24:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:13.617 00:09:13.617 real 0m18.167s 00:09:13.617 user 0m40.530s 00:09:13.617 sys 0m2.871s 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.617 ************************************ 00:09:13.617 END TEST app_repeat 00:09:13.617 ************************************ 00:09:13.617 16:24:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:13.617 16:24:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:13.617 16:24:55 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:13.617 16:24:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.617 16:24:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.617 16:24:55 event -- common/autotest_common.sh@10 -- # set +x 00:09:13.617 ************************************ 00:09:13.618 START TEST cpu_locks 00:09:13.618 ************************************ 00:09:13.618 16:24:55 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:13.876 * Looking for test storage... 
00:09:13.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:13.877 16:24:55 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:13.877 16:24:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:09:13.877 16:24:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:13.877 16:24:55 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.877 16:24:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:13.877 16:24:55 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.877 16:24:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:13.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.877 --rc genhtml_branch_coverage=1 00:09:13.877 --rc genhtml_function_coverage=1 00:09:13.877 --rc genhtml_legend=1 00:09:13.877 --rc geninfo_all_blocks=1 00:09:13.877 --rc geninfo_unexecuted_blocks=1 00:09:13.877 00:09:13.877 ' 00:09:13.877 16:24:55 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:13.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.877 --rc genhtml_branch_coverage=1 00:09:13.877 --rc genhtml_function_coverage=1 00:09:13.877 --rc genhtml_legend=1 00:09:13.877 --rc geninfo_all_blocks=1 00:09:13.877 --rc geninfo_unexecuted_blocks=1 
00:09:13.877 00:09:13.877 ' 00:09:13.877 16:24:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:13.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.877 --rc genhtml_branch_coverage=1 00:09:13.877 --rc genhtml_function_coverage=1 00:09:13.877 --rc genhtml_legend=1 00:09:13.877 --rc geninfo_all_blocks=1 00:09:13.877 --rc geninfo_unexecuted_blocks=1 00:09:13.877 00:09:13.877 ' 00:09:13.877 16:24:55 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:13.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.877 --rc genhtml_branch_coverage=1 00:09:13.877 --rc genhtml_function_coverage=1 00:09:13.877 --rc genhtml_legend=1 00:09:13.877 --rc geninfo_all_blocks=1 00:09:13.877 --rc geninfo_unexecuted_blocks=1 00:09:13.877 00:09:13.877 ' 00:09:13.877 16:24:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:13.877 16:24:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:13.877 16:24:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:13.877 16:24:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:13.877 16:24:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.877 16:24:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.877 16:24:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:13.877 ************************************ 00:09:13.877 START TEST default_locks 00:09:13.877 ************************************ 00:09:13.877 16:24:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:13.877 16:24:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71131 00:09:13.877 16:24:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71131 00:09:13.877 16:24:55 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:13.877 16:24:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71131 ']' 00:09:13.877 16:24:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.877 16:24:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.877 16:24:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.877 16:24:55 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.877 16:24:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:14.136 [2024-12-06 16:24:55.721165] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
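waitforlisten (autotest_common.sh) is the retry loop behind every "Waiting for process to start up and listen on UNIX domain socket …" message in this log. A simplified sketch under stated assumptions: the real helper also issues an RPC ping before declaring success, which is elided here, and the socket check is reduced to a plain existence test so the sketch is self-contained:

```shell
# Simplified stand-in for waitforlisten: poll until the RPC socket path
# exists, bailing out early if the target process dies mid-startup.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
        [ -e "$rpc_addr" ] && return 0           # real suite: -S plus an RPC ping
        sleep 0.1
    done
    return 1                                     # gave up after max_retries
}
```

Checking the pid on every iteration is the important design choice: without it, a crashed spdk_tgt would stall the test for the full 100-retry budget instead of failing immediately.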
00:09:14.136 [2024-12-06 16:24:55.721532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71131 ] 00:09:14.136 [2024-12-06 16:24:55.891804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.136 [2024-12-06 16:24:55.921101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.076 16:24:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.076 16:24:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:15.076 16:24:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71131 00:09:15.076 16:24:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71131 00:09:15.076 16:24:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:15.336 16:24:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71131 00:09:15.336 16:24:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 71131 ']' 00:09:15.336 16:24:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 71131 00:09:15.336 16:24:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:15.336 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.336 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71131 00:09:15.336 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.336 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.336 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71131' 00:09:15.336 killing process with pid 71131 00:09:15.336 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 71131 00:09:15.336 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 71131 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71131 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71131 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 71131 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71131 ']' 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
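The `NOT waitforlisten 71131` sequence that follows exercises the suite's NOT wrapper: run a command that is expected to fail and convert its failure into success, which is the `es=1` bookkeeping visible in the trace. Stripped to its core:

```shell
# Minimal sketch of the NOT helper: succeeds exactly when the wrapped
# command exits non-zero, so tests can assert expected failures.
NOT() {
    local es=0
    "$@" || es=$?
    # the full helper also special-cases es > 128 (death by signal),
    # as the '(( es > 128 ))' test in the trace shows; here we simply
    # invert the status
    (( es != 0 ))
}
```

Here it verifies that waiting on an already-killed pid fails with "No such process", which is exactly the ERROR line the log records.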
00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:15.597 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71131) - No such process 00:09:15.597 ERROR: process (pid: 71131) is no longer running 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:15.597 00:09:15.597 real 0m1.801s 00:09:15.597 user 0m1.773s 00:09:15.597 sys 0m0.628s 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.597 16:24:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:15.597 ************************************ 00:09:15.597 END TEST default_locks 00:09:15.597 ************************************ 00:09:15.982 16:24:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:15.982 16:24:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:09:15.982 16:24:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.982 16:24:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:15.982 ************************************ 00:09:15.982 START TEST default_locks_via_rpc 00:09:15.982 ************************************ 00:09:15.982 16:24:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:15.982 16:24:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71185 00:09:15.982 16:24:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:15.982 16:24:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71185 00:09:15.982 16:24:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71185 ']' 00:09:15.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.982 16:24:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.982 16:24:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.982 16:24:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.982 16:24:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.982 16:24:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.982 [2024-12-06 16:24:57.593729] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:09:15.982 [2024-12-06 16:24:57.593891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71185 ] 00:09:15.982 [2024-12-06 16:24:57.745735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.982 [2024-12-06 16:24:57.775808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.935 16:24:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71185 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71185 00:09:16.935 16:24:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:17.196 16:24:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71185 00:09:17.196 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71185 ']' 00:09:17.196 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71185 00:09:17.196 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:17.196 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.196 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71185 00:09:17.196 killing process with pid 71185 00:09:17.196 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.196 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.196 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71185' 00:09:17.196 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71185 00:09:17.196 16:24:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71185 00:09:17.456 00:09:17.456 real 0m1.761s 00:09:17.456 user 0m1.796s 00:09:17.456 sys 0m0.559s 00:09:17.456 ************************************ 00:09:17.456 END TEST default_locks_via_rpc 00:09:17.456 ************************************ 00:09:17.456 
16:24:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.456 16:24:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.716 16:24:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:17.716 16:24:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.716 16:24:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.716 16:24:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:17.716 ************************************ 00:09:17.716 START TEST non_locking_app_on_locked_coremask 00:09:17.716 ************************************ 00:09:17.716 16:24:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:17.716 16:24:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71231 00:09:17.716 16:24:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:17.716 16:24:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71231 /var/tmp/spdk.sock 00:09:17.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
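locks_exist, called around each killprocess in this section, asks whether a PID still holds the spdk_cpu_lock file lock; the trace shows it as `lslocks -p <pid>` piped into `grep -q spdk_cpu_lock`. As a standalone helper (the lock-file name is taken from the trace; `lslocks` is the util-linux tool that maps /proc/locks entries back to paths):

```shell
# Does the given PID hold a file lock whose path mentions spdk_cpu_lock?
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
```

In the suite this is the invariant under test: the lock must exist while spdk_tgt runs pinned to core 0 (`-m 0x1`) and must disappear once the process is killed, so a second instance can claim the core.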
00:09:17.716 16:24:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71231 ']' 00:09:17.716 16:24:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.716 16:24:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.716 16:24:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.716 16:24:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.716 16:24:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:17.716 [2024-12-06 16:24:59.422531] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:17.716 [2024-12-06 16:24:59.422659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71231 ] 00:09:17.975 [2024-12-06 16:24:59.593822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.975 [2024-12-06 16:24:59.620466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.544 16:25:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.544 16:25:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:18.544 16:25:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71247 00:09:18.544 16:25:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:18.544 16:25:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71247 /var/tmp/spdk2.sock 00:09:18.544 16:25:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71247 ']' 00:09:18.544 16:25:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:18.544 16:25:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.544 16:25:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:18.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:18.544 16:25:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.544 16:25:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:18.544 [2024-12-06 16:25:00.349473] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:18.544 [2024-12-06 16:25:00.349788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71247 ] 00:09:18.803 [2024-12-06 16:25:00.526787] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:18.803 [2024-12-06 16:25:00.526881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.803 [2024-12-06 16:25:00.585850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.391 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.391 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:19.391 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71231 00:09:19.391 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71231 00:09:19.391 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:19.651 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71231 00:09:19.651 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71231 ']' 00:09:19.651 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71231 00:09:19.651 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:19.911 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.911 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71231 00:09:19.911 killing process with pid 71231 00:09:19.911 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.911 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.911 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 71231' 00:09:19.911 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71231 00:09:19.911 16:25:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71231 00:09:20.518 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71247 00:09:20.518 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71247 ']' 00:09:20.518 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71247 00:09:20.518 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:20.518 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.518 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71247 00:09:20.518 killing process with pid 71247 00:09:20.518 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.518 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.518 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71247' 00:09:20.518 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71247 00:09:20.518 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71247 00:09:21.086 ************************************ 00:09:21.086 END TEST non_locking_app_on_locked_coremask 00:09:21.086 ************************************ 00:09:21.086 00:09:21.086 real 0m3.304s 00:09:21.086 
user 0m3.527s 00:09:21.086 sys 0m0.990s 00:09:21.086 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.086 16:25:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:21.086 16:25:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:21.086 16:25:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.086 16:25:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.086 16:25:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:21.086 ************************************ 00:09:21.086 START TEST locking_app_on_unlocked_coremask 00:09:21.086 ************************************ 00:09:21.086 16:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:21.086 16:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71311 00:09:21.086 16:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:21.086 16:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71311 /var/tmp/spdk.sock 00:09:21.086 16:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71311 ']' 00:09:21.086 16:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.086 16:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.086 16:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.086 16:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.086 16:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:21.086 [2024-12-06 16:25:02.788530] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:21.086 [2024-12-06 16:25:02.788738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71311 ] 00:09:21.346 [2024-12-06 16:25:02.941268] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:21.346 [2024-12-06 16:25:02.941455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.346 [2024-12-06 16:25:02.970972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.915 16:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.915 16:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:21.915 16:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:21.915 16:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71327 00:09:21.915 16:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71327 /var/tmp/spdk2.sock 00:09:21.915 16:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71327 ']' 00:09:21.915 16:25:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:21.915 16:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.915 16:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:21.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:21.915 16:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.915 16:25:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:21.915 [2024-12-06 16:25:03.708464] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:21.915 [2024-12-06 16:25:03.708670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71327 ] 00:09:22.175 [2024-12-06 16:25:03.876629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.175 [2024-12-06 16:25:03.937417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.116 16:25:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.116 16:25:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:23.116 16:25:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71327 00:09:23.116 16:25:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71327 00:09:23.116 16:25:04 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:23.377 16:25:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71311 00:09:23.377 16:25:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71311 ']' 00:09:23.377 16:25:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71311 00:09:23.377 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:23.377 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.377 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71311 00:09:23.377 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.377 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.377 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71311' 00:09:23.377 killing process with pid 71311 00:09:23.377 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71311 00:09:23.377 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71311 00:09:23.945 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71327 00:09:23.945 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71327 ']' 00:09:23.945 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71327 00:09:23.945 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:23.945 
16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.945 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71327 00:09:24.203 killing process with pid 71327 00:09:24.203 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.203 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.203 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71327' 00:09:24.203 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71327 00:09:24.203 16:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71327 00:09:24.462 00:09:24.462 real 0m3.472s 00:09:24.462 user 0m3.666s 00:09:24.462 sys 0m1.068s 00:09:24.462 ************************************ 00:09:24.462 END TEST locking_app_on_unlocked_coremask 00:09:24.462 ************************************ 00:09:24.462 16:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.462 16:25:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:24.462 16:25:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:24.462 16:25:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.462 16:25:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.462 16:25:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:24.462 ************************************ 00:09:24.462 START TEST locking_app_on_locked_coremask 00:09:24.462 
************************************ 00:09:24.462 16:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:24.462 16:25:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71385 00:09:24.462 16:25:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71385 /var/tmp/spdk.sock 00:09:24.462 16:25:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:24.462 16:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71385 ']' 00:09:24.462 16:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.462 16:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.462 16:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.462 16:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.462 16:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:24.721 [2024-12-06 16:25:06.339724] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:09:24.721 [2024-12-06 16:25:06.339853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71385 ] 00:09:24.721 [2024-12-06 16:25:06.511922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.721 [2024-12-06 16:25:06.541146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71401 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71401 /var/tmp/spdk2.sock 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71401 /var/tmp/spdk2.sock 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71401 /var/tmp/spdk2.sock 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71401 ']' 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:25.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.658 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:25.658 [2024-12-06 16:25:07.304076] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:25.658 [2024-12-06 16:25:07.304304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71401 ] 00:09:25.659 [2024-12-06 16:25:07.472152] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71385 has claimed it. 00:09:25.659 [2024-12-06 16:25:07.472257] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:09:26.226 ERROR: process (pid: 71401) is no longer running 00:09:26.226 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71401) - No such process 00:09:26.226 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.226 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:26.226 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:26.226 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:26.226 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:26.226 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:26.226 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71385 00:09:26.226 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71385 00:09:26.226 16:25:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:26.794 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71385 00:09:26.794 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71385 ']' 00:09:26.794 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71385 00:09:26.794 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:26.794 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.794 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71385 00:09:26.794 
killing process with pid 71385 00:09:26.794 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.794 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.794 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71385' 00:09:26.794 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71385 00:09:26.794 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71385 00:09:27.052 00:09:27.052 real 0m2.594s 00:09:27.052 user 0m2.877s 00:09:27.052 sys 0m0.766s 00:09:27.052 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.052 16:25:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:27.052 ************************************ 00:09:27.052 END TEST locking_app_on_locked_coremask 00:09:27.052 ************************************ 00:09:27.052 16:25:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:27.052 16:25:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.052 16:25:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.052 16:25:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:27.310 ************************************ 00:09:27.310 START TEST locking_overlapped_coremask 00:09:27.310 ************************************ 00:09:27.310 16:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:27.310 16:25:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71454 00:09:27.310 16:25:08 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:27.310 16:25:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71454 /var/tmp/spdk.sock 00:09:27.310 16:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71454 ']' 00:09:27.310 16:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.310 16:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.310 16:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.310 16:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.310 16:25:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:27.311 [2024-12-06 16:25:09.000260] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:09:27.311 [2024-12-06 16:25:09.000479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71454 ] 00:09:27.569 [2024-12-06 16:25:09.173199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.569 [2024-12-06 16:25:09.204810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.569 [2024-12-06 16:25:09.204957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.569 [2024-12-06 16:25:09.205065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71472 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71472 /var/tmp/spdk2.sock 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71472 /var/tmp/spdk2.sock 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:28.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71472 /var/tmp/spdk2.sock 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71472 ']' 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.136 16:25:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:28.394 [2024-12-06 16:25:10.020555] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:28.394 [2024-12-06 16:25:10.020721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71472 ] 00:09:28.394 [2024-12-06 16:25:10.193620] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71454 has claimed it. 00:09:28.394 [2024-12-06 16:25:10.193709] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:09:28.960 ERROR: process (pid: 71472) is no longer running 00:09:28.961 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71472) - No such process 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71454 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71454 ']' 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71454 00:09:28.961 16:25:10 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71454 00:09:28.961 killing process with pid 71454 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71454' 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71454 00:09:28.961 16:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71454 00:09:29.526 00:09:29.526 real 0m2.189s 00:09:29.526 user 0m5.986s 00:09:29.526 sys 0m0.538s 00:09:29.526 ************************************ 00:09:29.526 END TEST locking_overlapped_coremask 00:09:29.526 ************************************ 00:09:29.526 16:25:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.526 16:25:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:29.526 16:25:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:29.526 16:25:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.526 16:25:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.526 16:25:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:29.526 ************************************ 00:09:29.526 START TEST 
locking_overlapped_coremask_via_rpc 00:09:29.526 ************************************ 00:09:29.526 16:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:29.526 16:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71514 00:09:29.526 16:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:29.526 16:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71514 /var/tmp/spdk.sock 00:09:29.526 16:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71514 ']' 00:09:29.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.526 16:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.526 16:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.527 16:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.527 16:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.527 16:25:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.527 [2024-12-06 16:25:11.255920] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:09:29.527 [2024-12-06 16:25:11.256057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71514 ] 00:09:29.785 [2024-12-06 16:25:11.427527] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:29.785 [2024-12-06 16:25:11.427613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:29.785 [2024-12-06 16:25:11.460486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.785 [2024-12-06 16:25:11.460464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.785 [2024-12-06 16:25:11.460607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.353 16:25:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.353 16:25:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:30.353 16:25:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71532 00:09:30.353 16:25:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71532 /var/tmp/spdk2.sock 00:09:30.353 16:25:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:30.353 16:25:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71532 ']' 00:09:30.353 16:25:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:30.353 16:25:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.353 16:25:12 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:30.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:30.353 16:25:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.353 16:25:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.615 [2024-12-06 16:25:12.243978] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:30.615 [2024-12-06 16:25:12.244197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71532 ] 00:09:30.615 [2024-12-06 16:25:12.419526] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:30.615 [2024-12-06 16:25:12.419619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:30.873 [2024-12-06 16:25:12.480350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.873 [2024-12-06 16:25:12.483390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.873 [2024-12-06 16:25:12.483477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:31.442 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.442 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:31.442 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:31.442 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.442 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.442 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.442 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:31.442 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:31.442 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.443 16:25:13 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.443 [2024-12-06 16:25:13.116494] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71514 has claimed it. 00:09:31.443 request: 00:09:31.443 { 00:09:31.443 "method": "framework_enable_cpumask_locks", 00:09:31.443 "req_id": 1 00:09:31.443 } 00:09:31.443 Got JSON-RPC error response 00:09:31.443 response: 00:09:31.443 { 00:09:31.443 "code": -32603, 00:09:31.443 "message": "Failed to claim CPU core: 2" 00:09:31.443 } 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71514 /var/tmp/spdk.sock 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 71514 ']' 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.443 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.703 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.703 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:31.703 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71532 /var/tmp/spdk2.sock 00:09:31.703 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71532 ']' 00:09:31.703 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:31.703 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.703 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:31.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:31.703 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.703 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.963 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.963 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:31.963 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:31.963 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:31.963 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:31.963 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:31.963 00:09:31.963 real 0m2.469s 00:09:31.963 user 0m1.216s 00:09:31.963 sys 0m0.181s 00:09:31.963 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.963 16:25:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.963 ************************************ 00:09:31.963 END TEST locking_overlapped_coremask_via_rpc 00:09:31.963 ************************************ 00:09:31.963 16:25:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:31.963 16:25:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71514 ]] 00:09:31.963 16:25:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71514 00:09:31.963 16:25:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71514 ']' 00:09:31.963 16:25:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71514 00:09:31.963 16:25:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:31.963 16:25:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.963 16:25:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71514 00:09:31.963 16:25:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.963 16:25:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.963 16:25:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71514' 00:09:31.963 killing process with pid 71514 00:09:31.963 16:25:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71514 00:09:31.963 16:25:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71514 00:09:32.532 16:25:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71532 ]] 00:09:32.532 16:25:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71532 00:09:32.532 16:25:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71532 ']' 00:09:32.532 16:25:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71532 00:09:32.532 16:25:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:32.532 16:25:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.532 16:25:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71532 00:09:32.532 killing process with pid 71532 00:09:32.532 16:25:14 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:32.532 16:25:14 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:32.532 16:25:14 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71532' 00:09:32.532 16:25:14 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71532 00:09:32.532 16:25:14 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71532 00:09:32.791 16:25:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:32.791 16:25:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:32.791 16:25:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71514 ]] 00:09:32.791 Process with pid 71514 is not found 00:09:32.791 16:25:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71514 00:09:32.791 16:25:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71514 ']' 00:09:32.791 16:25:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71514 00:09:32.791 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71514) - No such process 00:09:32.791 16:25:14 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71514 is not found' 00:09:32.791 16:25:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71532 ]] 00:09:32.791 16:25:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71532 00:09:32.791 16:25:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71532 ']' 00:09:32.791 16:25:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71532 00:09:32.791 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71532) - No such process 00:09:32.791 16:25:14 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71532 is not found' 00:09:32.791 Process with pid 71532 is not found 00:09:32.791 16:25:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:32.791 ************************************ 00:09:32.791 END TEST cpu_locks 00:09:32.791 ************************************ 00:09:32.791 00:09:32.791 real 0m19.162s 00:09:32.791 user 0m32.935s 00:09:32.791 sys 0m5.863s 00:09:32.791 16:25:14 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:32.791 16:25:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:32.791 00:09:32.791 real 0m47.886s 00:09:32.791 user 1m33.202s 00:09:32.791 sys 0m9.828s 00:09:32.791 ************************************ 00:09:32.791 END TEST event 00:09:32.791 ************************************ 00:09:32.791 16:25:14 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.791 16:25:14 event -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 16:25:14 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:33.051 16:25:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.051 16:25:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.051 16:25:14 -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 ************************************ 00:09:33.051 START TEST thread 00:09:33.051 ************************************ 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:33.051 * Looking for test storage... 
00:09:33.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:33.051 16:25:14 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.051 16:25:14 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.051 16:25:14 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.051 16:25:14 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.051 16:25:14 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.051 16:25:14 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.051 16:25:14 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.051 16:25:14 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.051 16:25:14 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.051 16:25:14 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.051 16:25:14 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.051 16:25:14 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:33.051 16:25:14 thread -- scripts/common.sh@345 -- # : 1 00:09:33.051 16:25:14 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.051 16:25:14 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.051 16:25:14 thread -- scripts/common.sh@365 -- # decimal 1 00:09:33.051 16:25:14 thread -- scripts/common.sh@353 -- # local d=1 00:09:33.051 16:25:14 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.051 16:25:14 thread -- scripts/common.sh@355 -- # echo 1 00:09:33.051 16:25:14 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.051 16:25:14 thread -- scripts/common.sh@366 -- # decimal 2 00:09:33.051 16:25:14 thread -- scripts/common.sh@353 -- # local d=2 00:09:33.051 16:25:14 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.051 16:25:14 thread -- scripts/common.sh@355 -- # echo 2 00:09:33.051 16:25:14 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.051 16:25:14 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.051 16:25:14 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.051 16:25:14 thread -- scripts/common.sh@368 -- # return 0 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:33.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.051 --rc genhtml_branch_coverage=1 00:09:33.051 --rc genhtml_function_coverage=1 00:09:33.051 --rc genhtml_legend=1 00:09:33.051 --rc geninfo_all_blocks=1 00:09:33.051 --rc geninfo_unexecuted_blocks=1 00:09:33.051 00:09:33.051 ' 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:33.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.051 --rc genhtml_branch_coverage=1 00:09:33.051 --rc genhtml_function_coverage=1 00:09:33.051 --rc genhtml_legend=1 00:09:33.051 --rc geninfo_all_blocks=1 00:09:33.051 --rc geninfo_unexecuted_blocks=1 00:09:33.051 00:09:33.051 ' 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:33.051 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.051 --rc genhtml_branch_coverage=1 00:09:33.051 --rc genhtml_function_coverage=1 00:09:33.051 --rc genhtml_legend=1 00:09:33.051 --rc geninfo_all_blocks=1 00:09:33.051 --rc geninfo_unexecuted_blocks=1 00:09:33.051 00:09:33.051 ' 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:33.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.051 --rc genhtml_branch_coverage=1 00:09:33.051 --rc genhtml_function_coverage=1 00:09:33.051 --rc genhtml_legend=1 00:09:33.051 --rc geninfo_all_blocks=1 00:09:33.051 --rc geninfo_unexecuted_blocks=1 00:09:33.051 00:09:33.051 ' 00:09:33.051 16:25:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.051 16:25:14 thread -- common/autotest_common.sh@10 -- # set +x 00:09:33.309 ************************************ 00:09:33.309 START TEST thread_poller_perf 00:09:33.309 ************************************ 00:09:33.309 16:25:14 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:33.309 [2024-12-06 16:25:14.933798] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:09:33.309 [2024-12-06 16:25:14.934017] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71665 ] 00:09:33.309 [2024-12-06 16:25:15.103083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.309 [2024-12-06 16:25:15.132628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.309 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:34.689 [2024-12-06T16:25:16.528Z] ====================================== 00:09:34.689 [2024-12-06T16:25:16.528Z] busy:2302209676 (cyc) 00:09:34.689 [2024-12-06T16:25:16.528Z] total_run_count: 383000 00:09:34.689 [2024-12-06T16:25:16.528Z] tsc_hz: 2290000000 (cyc) 00:09:34.689 [2024-12-06T16:25:16.528Z] ====================================== 00:09:34.689 [2024-12-06T16:25:16.528Z] poller_cost: 6010 (cyc), 2624 (nsec) 00:09:34.689 00:09:34.689 real 0m1.315s 00:09:34.689 user 0m1.121s 00:09:34.689 sys 0m0.087s 00:09:34.689 16:25:16 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.689 16:25:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:34.689 ************************************ 00:09:34.689 END TEST thread_poller_perf 00:09:34.689 ************************************ 00:09:34.689 16:25:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:34.689 16:25:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:34.689 16:25:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.689 16:25:16 thread -- common/autotest_common.sh@10 -- # set +x 00:09:34.689 ************************************ 00:09:34.689 START TEST thread_poller_perf 00:09:34.689 
************************************ 00:09:34.689 16:25:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:34.689 [2024-12-06 16:25:16.308749] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:34.689 [2024-12-06 16:25:16.308891] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71701 ] 00:09:34.689 [2024-12-06 16:25:16.477432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.689 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:34.689 [2024-12-06 16:25:16.507444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.117 [2024-12-06T16:25:17.956Z] ====================================== 00:09:36.117 [2024-12-06T16:25:17.956Z] busy:2293585902 (cyc) 00:09:36.117 [2024-12-06T16:25:17.956Z] total_run_count: 4445000 00:09:36.117 [2024-12-06T16:25:17.956Z] tsc_hz: 2290000000 (cyc) 00:09:36.117 [2024-12-06T16:25:17.956Z] ====================================== 00:09:36.117 [2024-12-06T16:25:17.956Z] poller_cost: 515 (cyc), 224 (nsec) 00:09:36.117 00:09:36.117 real 0m1.307s 00:09:36.117 user 0m1.115s 00:09:36.117 sys 0m0.085s 00:09:36.117 16:25:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.117 ************************************ 00:09:36.117 END TEST thread_poller_perf 00:09:36.117 ************************************ 00:09:36.117 16:25:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:25:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:36.117 ************************************ 00:09:36.117 END TEST thread 00:09:36.117 ************************************ 00:09:36.117 
00:09:36.117 real 0m2.958s 00:09:36.117 user 0m2.399s 00:09:36.117 sys 0m0.367s 00:09:36.117 16:25:17 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.117 16:25:17 thread -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:25:17 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:36.117 16:25:17 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:36.117 16:25:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.117 16:25:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.117 16:25:17 -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 ************************************ 00:09:36.117 START TEST app_cmdline 00:09:36.117 ************************************ 00:09:36.117 16:25:17 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:36.117 * Looking for test storage... 00:09:36.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:36.117 16:25:17 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:36.117 16:25:17 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:36.117 16:25:17 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:36.117 16:25:17 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.117 16:25:17 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:36.117 16:25:17 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.117 16:25:17 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:36.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.117 --rc genhtml_branch_coverage=1 00:09:36.117 --rc genhtml_function_coverage=1 00:09:36.117 --rc 
genhtml_legend=1 00:09:36.117 --rc geninfo_all_blocks=1 00:09:36.117 --rc geninfo_unexecuted_blocks=1 00:09:36.117 00:09:36.117 ' 00:09:36.117 16:25:17 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:36.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.117 --rc genhtml_branch_coverage=1 00:09:36.117 --rc genhtml_function_coverage=1 00:09:36.117 --rc genhtml_legend=1 00:09:36.117 --rc geninfo_all_blocks=1 00:09:36.117 --rc geninfo_unexecuted_blocks=1 00:09:36.117 00:09:36.117 ' 00:09:36.117 16:25:17 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:36.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.118 --rc genhtml_branch_coverage=1 00:09:36.118 --rc genhtml_function_coverage=1 00:09:36.118 --rc genhtml_legend=1 00:09:36.118 --rc geninfo_all_blocks=1 00:09:36.118 --rc geninfo_unexecuted_blocks=1 00:09:36.118 00:09:36.118 ' 00:09:36.118 16:25:17 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:36.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.118 --rc genhtml_branch_coverage=1 00:09:36.118 --rc genhtml_function_coverage=1 00:09:36.118 --rc genhtml_legend=1 00:09:36.118 --rc geninfo_all_blocks=1 00:09:36.118 --rc geninfo_unexecuted_blocks=1 00:09:36.118 00:09:36.118 ' 00:09:36.118 16:25:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:36.118 16:25:17 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:36.118 16:25:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71779 00:09:36.118 16:25:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71779 00:09:36.118 16:25:17 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71779 ']' 00:09:36.118 16:25:17 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.118 16:25:17 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:09:36.118 16:25:17 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.118 16:25:17 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.118 16:25:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:36.377 [2024-12-06 16:25:17.991572] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:36.377 [2024-12-06 16:25:17.991706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71779 ] 00:09:36.377 [2024-12-06 16:25:18.165649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.377 [2024-12-06 16:25:18.197449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.314 16:25:18 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.314 16:25:18 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:37.315 16:25:18 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:37.315 { 00:09:37.315 "version": "SPDK v25.01-pre git sha1 a5e6ecf28", 00:09:37.315 "fields": { 00:09:37.315 "major": 25, 00:09:37.315 "minor": 1, 00:09:37.315 "patch": 0, 00:09:37.315 "suffix": "-pre", 00:09:37.315 "commit": "a5e6ecf28" 00:09:37.315 } 00:09:37.315 } 00:09:37.315 16:25:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:37.315 16:25:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:37.315 16:25:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:37.315 16:25:19 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:37.315 16:25:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:37.315 16:25:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:37.315 16:25:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.315 16:25:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:37.315 16:25:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:37.315 16:25:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:37.315 16:25:19 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:37.575 request: 00:09:37.575 { 00:09:37.575 "method": "env_dpdk_get_mem_stats", 00:09:37.575 "req_id": 1 00:09:37.575 } 00:09:37.575 Got JSON-RPC error response 00:09:37.575 response: 00:09:37.575 { 00:09:37.575 "code": -32601, 00:09:37.575 "message": "Method not found" 00:09:37.575 } 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.575 16:25:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71779 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71779 ']' 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71779 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71779 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.575 killing process with pid 71779 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71779' 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@973 -- # kill 71779 00:09:37.575 16:25:19 app_cmdline -- common/autotest_common.sh@978 -- # wait 71779 00:09:38.145 00:09:38.145 real 0m2.045s 00:09:38.145 user 0m2.354s 00:09:38.145 sys 0m0.535s 00:09:38.145 
************************************ 00:09:38.145 END TEST app_cmdline 00:09:38.145 ************************************ 00:09:38.145 16:25:19 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.145 16:25:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:38.145 16:25:19 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:38.145 16:25:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.145 16:25:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.145 16:25:19 -- common/autotest_common.sh@10 -- # set +x 00:09:38.145 ************************************ 00:09:38.145 START TEST version 00:09:38.145 ************************************ 00:09:38.145 16:25:19 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:38.146 * Looking for test storage... 00:09:38.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:38.146 16:25:19 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:38.146 16:25:19 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:38.146 16:25:19 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:38.146 16:25:19 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:38.146 16:25:19 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.146 16:25:19 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.146 16:25:19 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.146 16:25:19 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.146 16:25:19 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.146 16:25:19 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.146 16:25:19 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.146 16:25:19 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.146 16:25:19 version -- scripts/common.sh@340 -- # ver1_l=2 
00:09:38.146 16:25:19 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.146 16:25:19 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.146 16:25:19 version -- scripts/common.sh@344 -- # case "$op" in 00:09:38.146 16:25:19 version -- scripts/common.sh@345 -- # : 1 00:09:38.146 16:25:19 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.146 16:25:19 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.146 16:25:19 version -- scripts/common.sh@365 -- # decimal 1 00:09:38.146 16:25:19 version -- scripts/common.sh@353 -- # local d=1 00:09:38.146 16:25:19 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.146 16:25:19 version -- scripts/common.sh@355 -- # echo 1 00:09:38.146 16:25:19 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.146 16:25:19 version -- scripts/common.sh@366 -- # decimal 2 00:09:38.146 16:25:19 version -- scripts/common.sh@353 -- # local d=2 00:09:38.146 16:25:19 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.146 16:25:19 version -- scripts/common.sh@355 -- # echo 2 00:09:38.405 16:25:19 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.405 16:25:19 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.405 16:25:19 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.405 16:25:19 version -- scripts/common.sh@368 -- # return 0 00:09:38.405 16:25:19 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.405 16:25:19 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:38.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.405 --rc genhtml_branch_coverage=1 00:09:38.405 --rc genhtml_function_coverage=1 00:09:38.405 --rc genhtml_legend=1 00:09:38.405 --rc geninfo_all_blocks=1 00:09:38.405 --rc geninfo_unexecuted_blocks=1 00:09:38.405 00:09:38.405 ' 00:09:38.405 16:25:19 version -- 
common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:38.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.405 --rc genhtml_branch_coverage=1 00:09:38.405 --rc genhtml_function_coverage=1 00:09:38.405 --rc genhtml_legend=1 00:09:38.405 --rc geninfo_all_blocks=1 00:09:38.405 --rc geninfo_unexecuted_blocks=1 00:09:38.405 00:09:38.405 ' 00:09:38.406 16:25:19 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:38.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.406 --rc genhtml_branch_coverage=1 00:09:38.406 --rc genhtml_function_coverage=1 00:09:38.406 --rc genhtml_legend=1 00:09:38.406 --rc geninfo_all_blocks=1 00:09:38.406 --rc geninfo_unexecuted_blocks=1 00:09:38.406 00:09:38.406 ' 00:09:38.406 16:25:19 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:38.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.406 --rc genhtml_branch_coverage=1 00:09:38.406 --rc genhtml_function_coverage=1 00:09:38.406 --rc genhtml_legend=1 00:09:38.406 --rc geninfo_all_blocks=1 00:09:38.406 --rc geninfo_unexecuted_blocks=1 00:09:38.406 00:09:38.406 ' 00:09:38.406 16:25:19 version -- app/version.sh@17 -- # get_header_version major 00:09:38.406 16:25:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:38.406 16:25:19 version -- app/version.sh@14 -- # cut -f2 00:09:38.406 16:25:19 version -- app/version.sh@14 -- # tr -d '"' 00:09:38.406 16:25:19 version -- app/version.sh@17 -- # major=25 00:09:38.406 16:25:19 version -- app/version.sh@18 -- # get_header_version minor 00:09:38.406 16:25:20 version -- app/version.sh@14 -- # tr -d '"' 00:09:38.406 16:25:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:38.406 16:25:20 version -- app/version.sh@14 -- # cut -f2 00:09:38.406 16:25:20 version -- app/version.sh@18 -- 
# minor=1 00:09:38.406 16:25:20 version -- app/version.sh@19 -- # get_header_version patch 00:09:38.406 16:25:20 version -- app/version.sh@14 -- # tr -d '"' 00:09:38.406 16:25:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:38.406 16:25:20 version -- app/version.sh@14 -- # cut -f2 00:09:38.406 16:25:20 version -- app/version.sh@19 -- # patch=0 00:09:38.406 16:25:20 version -- app/version.sh@20 -- # get_header_version suffix 00:09:38.406 16:25:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:38.406 16:25:20 version -- app/version.sh@14 -- # cut -f2 00:09:38.406 16:25:20 version -- app/version.sh@14 -- # tr -d '"' 00:09:38.406 16:25:20 version -- app/version.sh@20 -- # suffix=-pre 00:09:38.406 16:25:20 version -- app/version.sh@22 -- # version=25.1 00:09:38.406 16:25:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:38.406 16:25:20 version -- app/version.sh@28 -- # version=25.1rc0 00:09:38.406 16:25:20 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:38.406 16:25:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:38.406 16:25:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:38.406 16:25:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:38.406 00:09:38.406 real 0m0.306s 00:09:38.406 user 0m0.187s 00:09:38.406 sys 0m0.165s 00:09:38.406 ************************************ 00:09:38.406 END TEST version 00:09:38.406 ************************************ 00:09:38.406 16:25:20 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.406 16:25:20 version -- 
common/autotest_common.sh@10 -- # set +x 00:09:38.406 16:25:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:38.406 16:25:20 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:09:38.406 16:25:20 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:38.406 16:25:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.406 16:25:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.406 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:09:38.406 ************************************ 00:09:38.406 START TEST bdev_raid 00:09:38.406 ************************************ 00:09:38.406 16:25:20 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:38.406 * Looking for test storage... 00:09:38.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:38.406 16:25:20 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:38.406 16:25:20 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:09:38.406 16:25:20 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:38.665 16:25:20 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:38.665 16:25:20 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.665 16:25:20 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.665 16:25:20 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.665 16:25:20 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.665 16:25:20 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.665 16:25:20 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.665 16:25:20 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.665 16:25:20 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.665 16:25:20 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.666 
16:25:20 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@345 -- # : 1 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.666 16:25:20 bdev_raid -- scripts/common.sh@368 -- # return 0 00:09:38.666 16:25:20 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.666 16:25:20 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:38.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.666 --rc genhtml_branch_coverage=1 00:09:38.666 --rc genhtml_function_coverage=1 00:09:38.666 --rc genhtml_legend=1 00:09:38.666 --rc geninfo_all_blocks=1 00:09:38.666 --rc geninfo_unexecuted_blocks=1 00:09:38.666 00:09:38.666 ' 00:09:38.666 16:25:20 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:09:38.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.666 --rc genhtml_branch_coverage=1 00:09:38.666 --rc genhtml_function_coverage=1 00:09:38.666 --rc genhtml_legend=1 00:09:38.666 --rc geninfo_all_blocks=1 00:09:38.666 --rc geninfo_unexecuted_blocks=1 00:09:38.666 00:09:38.666 ' 00:09:38.666 16:25:20 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:38.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.666 --rc genhtml_branch_coverage=1 00:09:38.666 --rc genhtml_function_coverage=1 00:09:38.666 --rc genhtml_legend=1 00:09:38.666 --rc geninfo_all_blocks=1 00:09:38.666 --rc geninfo_unexecuted_blocks=1 00:09:38.666 00:09:38.666 ' 00:09:38.666 16:25:20 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:38.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.666 --rc genhtml_branch_coverage=1 00:09:38.666 --rc genhtml_function_coverage=1 00:09:38.666 --rc genhtml_legend=1 00:09:38.666 --rc geninfo_all_blocks=1 00:09:38.666 --rc geninfo_unexecuted_blocks=1 00:09:38.666 00:09:38.666 ' 00:09:38.666 16:25:20 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:38.666 16:25:20 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:38.666 16:25:20 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:09:38.666 16:25:20 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:09:38.666 16:25:20 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:09:38.666 16:25:20 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:09:38.666 16:25:20 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:09:38.666 16:25:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.666 16:25:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.666 16:25:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:09:38.666 ************************************ 00:09:38.666 START TEST raid1_resize_data_offset_test 00:09:38.666 ************************************ 00:09:38.666 16:25:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:09:38.666 16:25:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71950 00:09:38.666 16:25:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71950' 00:09:38.666 Process raid pid: 71950 00:09:38.666 16:25:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:38.666 16:25:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71950 00:09:38.666 16:25:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 71950 ']' 00:09:38.666 16:25:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.666 16:25:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.666 16:25:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.666 16:25:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.666 16:25:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.666 [2024-12-06 16:25:20.475390] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:09:38.666 [2024-12-06 16:25:20.475626] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.926 [2024-12-06 16:25:20.648843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.926 [2024-12-06 16:25:20.675056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.926 [2024-12-06 16:25:20.718886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.926 [2024-12-06 16:25:20.719010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 malloc0 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 malloc1 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.865 16:25:21 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 null0 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 [2024-12-06 16:25:21.441740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:09:39.865 [2024-12-06 16:25:21.443661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:39.865 [2024-12-06 16:25:21.443709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:09:39.865 [2024-12-06 16:25:21.443828] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:39.865 [2024-12-06 16:25:21.443839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:09:39.865 [2024-12-06 16:25:21.444088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:09:39.865 [2024-12-06 16:25:21.444237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:39.865 [2024-12-06 16:25:21.444251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:09:39.865 [2024-12-06 16:25:21.444421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 [2024-12-06 16:25:21.501586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 malloc2 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 [2024-12-06 16:25:21.631464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:39.865 [2024-12-06 16:25:21.636797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.865 [2024-12-06 16:25:21.638760] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71950 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 71950 ']' 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 71950 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:09:39.865 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71950 00:09:40.125 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.125 killing process with pid 71950 00:09:40.125 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.125 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71950' 00:09:40.125 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 71950 00:09:40.125 16:25:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 71950 00:09:40.125 [2024-12-06 16:25:21.733610] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.125 [2024-12-06 16:25:21.735355] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:09:40.125 [2024-12-06 16:25:21.735416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.125 [2024-12-06 16:25:21.735434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:09:40.125 [2024-12-06 16:25:21.741990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.125 [2024-12-06 16:25:21.742300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.125 [2024-12-06 16:25:21.742318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:09:40.125 [2024-12-06 16:25:21.960113] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:40.383 16:25:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:09:40.383 00:09:40.383 real 0m1.782s 00:09:40.383 user 0m1.823s 00:09:40.383 sys 0m0.456s 00:09:40.383 16:25:22 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.383 ************************************ 00:09:40.383 END TEST raid1_resize_data_offset_test 00:09:40.383 ************************************ 00:09:40.383 16:25:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.643 16:25:22 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:09:40.643 16:25:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.643 16:25:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.643 16:25:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:40.643 ************************************ 00:09:40.643 START TEST raid0_resize_superblock_test 00:09:40.643 ************************************ 00:09:40.643 16:25:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:09:40.643 16:25:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:09:40.643 16:25:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=72001 00:09:40.643 Process raid pid: 72001 00:09:40.643 16:25:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:40.643 16:25:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 72001' 00:09:40.643 16:25:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 72001 00:09:40.643 16:25:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72001 ']' 00:09:40.643 16:25:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.643 16:25:22 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.643 16:25:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.643 16:25:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.643 16:25:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.643 [2024-12-06 16:25:22.319694] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:40.643 [2024-12-06 16:25:22.319829] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.902 [2024-12-06 16:25:22.492529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.902 [2024-12-06 16:25:22.522930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.902 [2024-12-06 16:25:22.567788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.902 [2024-12-06 16:25:22.567835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:41.527 malloc0 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.527 [2024-12-06 16:25:23.293198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:41.527 [2024-12-06 16:25:23.293267] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.527 [2024-12-06 16:25:23.293288] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:41.527 [2024-12-06 16:25:23.293299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.527 [2024-12-06 16:25:23.295428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.527 [2024-12-06 16:25:23.295534] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:41.527 pt0 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.527 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.786 82959ea4-92a9-464d-9307-36bb82472be5 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.786 1f0f18af-e9a6-48ab-a627-bcd38348a3b7 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.786 0f63c98a-0080-40e5-b4ae-924fe6974732 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.786 [2024-12-06 16:25:23.432099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1f0f18af-e9a6-48ab-a627-bcd38348a3b7 is claimed 00:09:41.786 [2024-12-06 16:25:23.432244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0f63c98a-0080-40e5-b4ae-924fe6974732 is claimed 00:09:41.786 [2024-12-06 16:25:23.432371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:41.786 [2024-12-06 16:25:23.432392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:09:41.786 [2024-12-06 16:25:23.432723] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:41.786 [2024-12-06 16:25:23.432943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:41.786 [2024-12-06 16:25:23.432958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:09:41.786 [2024-12-06 16:25:23.433118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:41.786 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:41.787 16:25:23 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:09:41.787 [2024-12-06 16:25:23.548194] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 [2024-12-06 16:25:23.600051] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:41.787 [2024-12-06 16:25:23.600090] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1f0f18af-e9a6-48ab-a627-bcd38348a3b7' was resized: old size 131072, new size 204800 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.787 [2024-12-06 16:25:23.611944] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:41.787 [2024-12-06 16:25:23.611975] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0f63c98a-0080-40e5-b4ae-924fe6974732' was resized: old size 131072, new size 204800 00:09:41.787 [2024-12-06 16:25:23.612012] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.787 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.045 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.045 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:42.045 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:42.045 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:42.045 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.045 16:25:23 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.045 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:09:42.046 [2024-12-06 16:25:23.719834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.046 [2024-12-06 16:25:23.767534] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:09:42.046 [2024-12-06 16:25:23.767639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:42.046 [2024-12-06 16:25:23.767652] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.046 [2024-12-06 16:25:23.767666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:42.046 [2024-12-06 16:25:23.767809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.046 [2024-12-06 16:25:23.767851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.046 [2024-12-06 16:25:23.767865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.046 [2024-12-06 16:25:23.775473] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:42.046 [2024-12-06 16:25:23.775541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.046 [2024-12-06 16:25:23.775562] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:42.046 [2024-12-06 16:25:23.775572] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.046 [2024-12-06 16:25:23.777974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.046 [2024-12-06 16:25:23.778018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:09:42.046 [2024-12-06 16:25:23.779651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1f0f18af-e9a6-48ab-a627-bcd38348a3b7 00:09:42.046 [2024-12-06 16:25:23.779705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1f0f18af-e9a6-48ab-a627-bcd38348a3b7 is claimed 00:09:42.046 [2024-12-06 16:25:23.779799] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0f63c98a-0080-40e5-b4ae-924fe6974732 00:09:42.046 [2024-12-06 16:25:23.779820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0f63c98a-0080-40e5-b4ae-924fe6974732 is claimed 00:09:42.046 [2024-12-06 16:25:23.779952] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 0f63c98a-0080-40e5-b4ae-924fe6974732 (2) smaller than existing raid bdev Raid (3) 00:09:42.046 [2024-12-06 16:25:23.779975] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 1f0f18af-e9a6-48ab-a627-bcd38348a3b7: File exists 00:09:42.046 [2024-12-06 16:25:23.780015] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:09:42.046 [2024-12-06 16:25:23.780025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:09:42.046 pt0 00:09:42.046 [2024-12-06 16:25:23.780289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:42.046 [2024-12-06 16:25:23.780430] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:09:42.046 [2024-12-06 16:25:23.780440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:09:42.046 [2024-12-06 16:25:23.780603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.046 [2024-12-06 16:25:23.796002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 72001 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72001 ']' 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72001 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72001 00:09:42.046 killing process with pid 72001 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72001' 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 72001 00:09:42.046 [2024-12-06 16:25:23.882622] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.046 [2024-12-06 16:25:23.882729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.046 16:25:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 72001 00:09:42.046 [2024-12-06 16:25:23.882786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.046 [2024-12-06 16:25:23.882796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:09:42.306 [2024-12-06 16:25:24.046523] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.564 16:25:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:42.564 00:09:42.564 real 0m2.035s 00:09:42.564 user 0m2.339s 00:09:42.564 sys 0m0.504s 00:09:42.564 16:25:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.564 16:25:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.564 
************************************ 00:09:42.564 END TEST raid0_resize_superblock_test 00:09:42.564 ************************************ 00:09:42.564 16:25:24 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:09:42.564 16:25:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.564 16:25:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.564 16:25:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.564 ************************************ 00:09:42.564 START TEST raid1_resize_superblock_test 00:09:42.564 ************************************ 00:09:42.564 16:25:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:09:42.564 16:25:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:09:42.564 16:25:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=72072 00:09:42.564 16:25:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:42.564 16:25:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 72072' 00:09:42.564 Process raid pid: 72072 00:09:42.564 16:25:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 72072 00:09:42.564 16:25:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72072 ']' 00:09:42.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:42.564 16:25:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.564 16:25:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.564 16:25:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.564 16:25:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.564 16:25:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.823 [2024-12-06 16:25:24.427723] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:42.823 [2024-12-06 16:25:24.427947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.823 [2024-12-06 16:25:24.600719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.823 [2024-12-06 16:25:24.630856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.081 [2024-12-06 16:25:24.675390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.081 [2024-12-06 16:25:24.675521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.647 malloc0 00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.647 [2024-12-06 16:25:25.436893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:43.647 [2024-12-06 16:25:25.436969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.647 [2024-12-06 16:25:25.436996] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:43.647 [2024-12-06 16:25:25.437010] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.647 [2024-12-06 16:25:25.439567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.647 [2024-12-06 16:25:25.439616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:43.647 pt0 00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.647 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.904 124ea14f-cf48-4122-9f04-04614d5e1104 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.905 16:25:25 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.905 97a664cb-8d42-46f9-bfb7-3b69907f6335 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.905 4bd2caf0-a5ed-46e8-85b9-514554612ae8 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.905 [2024-12-06 16:25:25.576931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 97a664cb-8d42-46f9-bfb7-3b69907f6335 is claimed 00:09:43.905 [2024-12-06 16:25:25.577150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4bd2caf0-a5ed-46e8-85b9-514554612ae8 is claimed 00:09:43.905 [2024-12-06 16:25:25.577472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:43.905 [2024-12-06 16:25:25.577548] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:09:43.905 [2024-12-06 16:25:25.577903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:43.905 [2024-12-06 16:25:25.578156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:43.905 [2024-12-06 16:25:25.578222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:09:43.905 [2024-12-06 16:25:25.578434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.905 16:25:25 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.905 [2024-12-06 16:25:25.697059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.905 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.163 [2024-12-06 16:25:25.744961] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:44.163 [2024-12-06 16:25:25.744996] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '97a664cb-8d42-46f9-bfb7-3b69907f6335' was resized: old size 131072, new size 204800 
00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.163 [2024-12-06 16:25:25.756919] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:44.163 [2024-12-06 16:25:25.757004] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4bd2caf0-a5ed-46e8-85b9-514554612ae8' was resized: old size 131072, new size 204800 00:09:44.163 [2024-12-06 16:25:25.757044] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:44.163 16:25:25 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.163 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:09:44.164 [2024-12-06 16:25:25.868848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:44.164 [2024-12-06 16:25:25.916488] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:09:44.164 [2024-12-06 16:25:25.916595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:44.164 [2024-12-06 16:25:25.916639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:44.164 [2024-12-06 16:25:25.916827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.164 [2024-12-06 16:25:25.917004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.164 [2024-12-06 16:25:25.917071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.164 [2024-12-06 16:25:25.917085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.164 [2024-12-06 16:25:25.928418] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:44.164 [2024-12-06 16:25:25.928486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.164 [2024-12-06 16:25:25.928508] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:44.164 [2024-12-06 16:25:25.928531] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.164 [2024-12-06 16:25:25.930832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.164 
[2024-12-06 16:25:25.930874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:44.164 [2024-12-06 16:25:25.932467] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 97a664cb-8d42-46f9-bfb7-3b69907f6335 00:09:44.164 [2024-12-06 16:25:25.932647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 97a664cb-8d42-46f9-bfb7-3b69907f6335 is claimed 00:09:44.164 [2024-12-06 16:25:25.932766] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4bd2caf0-a5ed-46e8-85b9-514554612ae8 00:09:44.164 [2024-12-06 16:25:25.932792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4bd2caf0-a5ed-46e8-85b9-514554612ae8 is claimed 00:09:44.164 [2024-12-06 16:25:25.932932] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 4bd2caf0-a5ed-46e8-85b9-514554612ae8 (2) smaller than existing raid bdev Raid (3) 00:09:44.164 [2024-12-06 16:25:25.932956] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 97a664cb-8d42-46f9-bfb7-3b69907f6335: File exists 00:09:44.164 [2024-12-06 16:25:25.932999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:09:44.164 [2024-12-06 16:25:25.933010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:44.164 pt0 00:09:44.164 [2024-12-06 16:25:25.933292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:44.164 [2024-12-06 16:25:25.933442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:09:44.164 [2024-12-06 16:25:25.933460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:09:44.164 [2024-12-06 16:25:25.933596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:09:44.164 [2024-12-06 16:25:25.952903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 72072 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72072 ']' 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 72072 00:09:44.164 16:25:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:44.423 16:25:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.423 16:25:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72072 00:09:44.423 killing process with pid 72072 00:09:44.423 16:25:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.423 16:25:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.423 16:25:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72072' 00:09:44.423 16:25:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 72072 00:09:44.423 [2024-12-06 16:25:26.026289] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.423 [2024-12-06 16:25:26.026389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.423 [2024-12-06 16:25:26.026446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.423 [2024-12-06 16:25:26.026456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:09:44.423 16:25:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 72072 00:09:44.423 [2024-12-06 16:25:26.189898] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.682 16:25:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:44.682 00:09:44.682 real 0m2.054s 00:09:44.682 user 0m2.394s 00:09:44.682 sys 0m0.495s 00:09:44.682 16:25:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.682 
16:25:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.682 ************************************ 00:09:44.682 END TEST raid1_resize_superblock_test 00:09:44.682 ************************************ 00:09:44.682 16:25:26 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:09:44.682 16:25:26 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:09:44.682 16:25:26 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:09:44.682 16:25:26 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:09:44.682 16:25:26 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:09:44.682 16:25:26 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:44.682 16:25:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.682 16:25:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.682 16:25:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.682 ************************************ 00:09:44.682 START TEST raid_function_test_raid0 00:09:44.682 ************************************ 00:09:44.682 16:25:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:09:44.682 16:25:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:09:44.682 16:25:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:44.683 16:25:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:44.683 16:25:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=72147 00:09:44.683 Process raid pid: 72147 00:09:44.683 16:25:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72147' 00:09:44.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:44.683 16:25:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 72147 00:09:44.683 16:25:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 72147 ']' 00:09:44.683 16:25:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.683 16:25:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.683 16:25:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.683 16:25:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.683 16:25:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:44.683 16:25:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:44.942 [2024-12-06 16:25:26.559144] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:09:44.942 [2024-12-06 16:25:26.559288] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.942 [2024-12-06 16:25:26.729269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.942 [2024-12-06 16:25:26.757712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.200 [2024-12-06 16:25:26.802316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.200 [2024-12-06 16:25:26.802360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:45.766 Base_1 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:45.766 Base_2 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:45.766 [2024-12-06 16:25:27.444690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:45.766 [2024-12-06 16:25:27.446646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:45.766 [2024-12-06 16:25:27.446799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:45.766 [2024-12-06 16:25:27.446819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:45.766 [2024-12-06 16:25:27.447151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:45.766 [2024-12-06 16:25:27.447311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:45.766 [2024-12-06 16:25:27.447322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:09:45.766 [2024-12-06 16:25:27.447506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:45.766 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:46.074 [2024-12-06 16:25:27.672336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:46.074 /dev/nbd0 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:46.074 
16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:46.074 1+0 records in 00:09:46.074 1+0 records out 00:09:46.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387795 s, 10.6 MB/s 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:46.074 16:25:27 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:46.344 { 00:09:46.344 "nbd_device": "/dev/nbd0", 00:09:46.344 "bdev_name": "raid" 00:09:46.344 } 00:09:46.344 ]' 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:46.344 { 00:09:46.344 "nbd_device": "/dev/nbd0", 00:09:46.344 "bdev_name": "raid" 00:09:46.344 } 00:09:46.344 ]' 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:46.344 16:25:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:46.344 4096+0 records in 00:09:46.344 4096+0 records out 00:09:46.344 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0339515 s, 61.8 MB/s 00:09:46.344 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:46.601 4096+0 records in 00:09:46.601 4096+0 records out 00:09:46.601 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.199231 s, 10.5 MB/s 00:09:46.601 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:46.601 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:46.601 16:25:28 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:46.601 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:46.601 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:46.601 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:46.601 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:46.601 128+0 records in 00:09:46.601 128+0 records out 00:09:46.601 65536 bytes (66 kB, 64 KiB) copied, 0.00134914 s, 48.6 MB/s 00:09:46.601 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:46.601 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:46.602 2035+0 records in 00:09:46.602 2035+0 records out 00:09:46.602 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.014941 s, 69.7 MB/s 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:46.602 16:25:28 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:46.602 456+0 records in 00:09:46.602 456+0 records out 00:09:46.602 233472 bytes (233 kB, 228 KiB) copied, 0.00396563 s, 58.9 MB/s 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:46.602 16:25:28 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.602 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:46.858 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:46.858 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:46.858 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:46.858 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.858 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.858 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:46.858 [2024-12-06 16:25:28.573830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.858 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:46.858 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.858 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:46.858 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:46.858 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 72147 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 72147 ']' 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 72147 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72147 00:09:47.116 killing process with pid 72147 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72147' 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 72147 
00:09:47.116 [2024-12-06 16:25:28.903631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.116 16:25:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 72147 00:09:47.116 [2024-12-06 16:25:28.903740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.116 [2024-12-06 16:25:28.903791] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.116 [2024-12-06 16:25:28.903814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:09:47.116 [2024-12-06 16:25:28.927623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.374 ************************************ 00:09:47.374 END TEST raid_function_test_raid0 00:09:47.374 ************************************ 00:09:47.374 16:25:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:09:47.374 00:09:47.374 real 0m2.668s 00:09:47.374 user 0m3.315s 00:09:47.374 sys 0m0.892s 00:09:47.374 16:25:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.374 16:25:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:47.374 16:25:29 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:09:47.374 16:25:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.374 16:25:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.374 16:25:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.374 ************************************ 00:09:47.374 START TEST raid_function_test_concat 00:09:47.374 ************************************ 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=72259 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72259' 00:09:47.374 Process raid pid: 72259 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 72259 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 72259 ']' 00:09:47.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.374 16:25:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:47.631 [2024-12-06 16:25:29.290748] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:09:47.631 [2024-12-06 16:25:29.291335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.631 [2024-12-06 16:25:29.461896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.889 [2024-12-06 16:25:29.491030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.889 [2024-12-06 16:25:29.535272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.889 [2024-12-06 16:25:29.535397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:48.456 Base_1 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:48.456 Base_2 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:48.456 [2024-12-06 16:25:30.174237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:48.456 [2024-12-06 16:25:30.176543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:48.456 [2024-12-06 16:25:30.176697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:48.456 [2024-12-06 16:25:30.176761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:48.456 [2024-12-06 16:25:30.177118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:48.456 [2024-12-06 16:25:30.177331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:48.456 [2024-12-06 16:25:30.177384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:09:48.456 [2024-12-06 16:25:30.177574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.456 16:25:30 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:48.456 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:48.714 [2024-12-06 16:25:30.437805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:48.714 /dev/nbd0 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.714 1+0 records in 00:09:48.714 1+0 records out 00:09:48.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576418 s, 7.1 MB/s 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:09:48.714 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.715 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:48.715 16:25:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:09:48.715 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.715 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:48.715 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:48.715 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:09:48.715 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:48.973 { 00:09:48.973 "nbd_device": "/dev/nbd0", 00:09:48.973 "bdev_name": "raid" 00:09:48.973 } 00:09:48.973 ]' 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:48.973 { 00:09:48.973 "nbd_device": "/dev/nbd0", 00:09:48.973 "bdev_name": "raid" 00:09:48.973 } 00:09:48.973 ]' 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:48.973 16:25:30 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:48.973 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:49.231 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:49.231 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:49.231 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:49.231 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:49.231 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:49.231 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:49.231 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:49.231 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:49.231 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:49.231 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:49.231 4096+0 records in 00:09:49.231 4096+0 records out 00:09:49.231 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0357394 s, 58.7 MB/s 00:09:49.231 16:25:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:49.231 4096+0 records in 00:09:49.231 4096+0 records out 00:09:49.231 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.20717 s, 10.1 MB/s 00:09:49.490 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:49.490 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:09:49.490 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:49.490 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:49.490 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:49.490 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:49.490 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:49.490 128+0 records in 00:09:49.490 128+0 records out 00:09:49.490 65536 bytes (66 kB, 64 KiB) copied, 0.00115452 s, 56.8 MB/s 00:09:49.490 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:49.490 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:49.491 2035+0 records in 00:09:49.491 2035+0 records out 00:09:49.491 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.012853 s, 81.1 MB/s 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:49.491 456+0 records in 00:09:49.491 456+0 records out 00:09:49.491 233472 bytes (233 kB, 228 KiB) copied, 0.00214306 s, 109 MB/s 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:49.491 16:25:31 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.491 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:49.749 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:49.749 [2024-12-06 16:25:31.407679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.749 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:49.749 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:49.749 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.749 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.749 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:49.749 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:49.749 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.749 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:49.749 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:49.749 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 72259 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 72259 ']' 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 72259 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72259 00:09:50.006 killing process with pid 72259 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.006 16:25:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.007 16:25:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 72259' 00:09:50.007 16:25:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 72259 00:09:50.007 [2024-12-06 16:25:31.750811] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.007 [2024-12-06 16:25:31.750934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.007 16:25:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 72259 00:09:50.007 [2024-12-06 16:25:31.750995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.007 [2024-12-06 16:25:31.751009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:09:50.007 [2024-12-06 16:25:31.775317] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:50.264 16:25:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:09:50.264 00:09:50.264 real 0m2.790s 00:09:50.264 user 0m3.467s 00:09:50.264 sys 0m0.986s 00:09:50.264 16:25:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.264 ************************************ 00:09:50.264 END TEST raid_function_test_concat 00:09:50.264 ************************************ 00:09:50.264 16:25:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:50.264 16:25:32 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:09:50.264 16:25:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.264 16:25:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.264 16:25:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:50.264 ************************************ 00:09:50.264 START TEST raid0_resize_test 00:09:50.264 ************************************ 00:09:50.264 16:25:32 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:09:50.264 16:25:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:09:50.264 16:25:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:50.264 16:25:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:50.264 16:25:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:50.264 16:25:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:50.264 16:25:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:50.264 16:25:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:50.264 16:25:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:50.265 16:25:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72376 00:09:50.265 Process raid pid: 72376 00:09:50.265 16:25:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:50.265 16:25:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72376' 00:09:50.265 16:25:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72376 00:09:50.265 16:25:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 72376 ']' 00:09:50.265 16:25:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.265 16:25:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.265 16:25:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:50.265 16:25:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.265 16:25:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.523 [2024-12-06 16:25:32.145040] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:50.523 [2024-12-06 16:25:32.145270] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.523 [2024-12-06 16:25:32.316767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.523 [2024-12-06 16:25:32.346597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.781 [2024-12-06 16:25:32.391099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.781 [2024-12-06 16:25:32.391229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.353 Base_1 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:51.353 Base_2 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.353 [2024-12-06 16:25:33.035189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:51.353 [2024-12-06 16:25:33.039548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:51.353 [2024-12-06 16:25:33.039718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:51.353 [2024-12-06 16:25:33.039793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:51.353 [2024-12-06 16:25:33.040440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:09:51.353 [2024-12-06 16:25:33.040770] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:51.353 [2024-12-06 16:25:33.040826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:09:51.353 [2024-12-06 16:25:33.041228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:09:51.353 [2024-12-06 16:25:33.047925] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:51.353 [2024-12-06 16:25:33.047996] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:51.353 true 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.353 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.353 [2024-12-06 16:25:33.063930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.354 [2024-12-06 16:25:33.107683] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:51.354 [2024-12-06 16:25:33.107706] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:51.354 [2024-12-06 16:25:33.107737] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:51.354 true 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:51.354 [2024-12-06 16:25:33.119855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72376 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 72376 ']' 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 72376 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:09:51.354 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72376 00:09:51.621 killing process with pid 72376 00:09:51.621 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.621 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.621 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72376' 00:09:51.621 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 72376 00:09:51.621 [2024-12-06 16:25:33.200482] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.621 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 72376 00:09:51.621 [2024-12-06 16:25:33.200598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.621 [2024-12-06 16:25:33.200661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.621 [2024-12-06 16:25:33.200671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:09:51.621 [2024-12-06 16:25:33.202278] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.621 16:25:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:51.621 00:09:51.621 real 0m1.362s 00:09:51.621 user 0m1.556s 00:09:51.621 sys 0m0.293s 00:09:51.621 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.621 ************************************ 00:09:51.621 END TEST raid0_resize_test 00:09:51.621 ************************************ 00:09:51.621 16:25:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.879 16:25:33 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:09:51.879 
16:25:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:51.879 16:25:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.879 16:25:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.879 ************************************ 00:09:51.879 START TEST raid1_resize_test 00:09:51.879 ************************************ 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72427 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72427' 00:09:51.879 Process raid pid: 72427 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72427 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 72427 ']' 00:09:51.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.879 16:25:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.879 [2024-12-06 16:25:33.577137] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:51.879 [2024-12-06 16:25:33.577323] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.137 [2024-12-06 16:25:33.752057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.137 [2024-12-06 16:25:33.782523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.137 [2024-12-06 16:25:33.827775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.137 [2024-12-06 16:25:33.827814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.703 Base_1 
00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.703 Base_2 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.703 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.703 [2024-12-06 16:25:34.444233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:52.703 [2024-12-06 16:25:34.446080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:52.703 [2024-12-06 16:25:34.446146] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:52.703 [2024-12-06 16:25:34.446160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:52.703 [2024-12-06 16:25:34.446431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:09:52.704 [2024-12-06 16:25:34.446540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:52.704 [2024-12-06 16:25:34.446550] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:09:52.704 [2024-12-06 16:25:34.446665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
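The creation entries above report the new raid1 array with "blockcnt 65536, blocklen 512" for two 32 MiB null base bdevs. That geometry follows from simple arithmetic; a standalone sketch (variable names are illustrative, values taken from the recorded test):

```shell
# Geometry implied by the log above: a 32 MiB bdev at 512-byte blocks.
blksize=512          # bytes per block, as passed to bdev_null_create
bdev_size_mb=32      # size of each null base bdev in MiB
blkcnt=$(( bdev_size_mb * 1024 * 1024 / blksize ))
echo "$blkcnt"       # matches the "blockcnt 65536" in the raid creation entry
```

The same arithmetic run in reverse (blocks times block size) is what the test's `raid_size_mb` computation performs when checking the array after each resize.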
00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.704 [2024-12-06 16:25:34.452188] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:52.704 [2024-12-06 16:25:34.452269] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:52.704 true 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.704 [2024-12-06 16:25:34.464348] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.704 [2024-12-06 16:25:34.512116] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:52.704 [2024-12-06 16:25:34.512141] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:52.704 [2024-12-06 16:25:34.512172] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:09:52.704 true 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.704 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.704 [2024-12-06 16:25:34.528290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 72427 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 72427 ']' 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 72427 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72427 00:09:52.963 killing process with pid 72427 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72427' 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 72427 00:09:52.963 [2024-12-06 16:25:34.617134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.963 [2024-12-06 16:25:34.617267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.963 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 72427 00:09:52.963 [2024-12-06 16:25:34.617772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.963 [2024-12-06 16:25:34.617858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:09:52.963 [2024-12-06 16:25:34.619128] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.221 16:25:34 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:53.221 00:09:53.221 real 0m1.375s 00:09:53.221 user 0m1.559s 00:09:53.221 sys 0m0.316s 00:09:53.221 16:25:34 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.221 16:25:34 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.221 ************************************ 00:09:53.221 END TEST raid1_resize_test 00:09:53.221 ************************************ 00:09:53.221 16:25:34 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:53.221 16:25:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:53.221 16:25:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:53.221 16:25:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:53.221 16:25:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.221 16:25:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.221 ************************************ 00:09:53.221 START TEST raid_state_function_test 00:09:53.221 ************************************ 00:09:53.221 16:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:09:53.221 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:53.221 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:53.221 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:53.221 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:53.221 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:53.221 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.221 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
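The resize sequence recorded in the raid1 test above — Base_1 grown to 64 MiB while the array stays at 65536 blocks, then Base_2 grown as well and the array moving to 131072 blocks — is consistent with a raid1 array being bounded by its smallest base bdev. A sketch of that bound (a hypothetical helper, not a function from bdev_raid.sh):

```shell
# raid1 capacity tracks the smallest base bdev (illustrative helper only).
min_mb() { if [ "$1" -le "$2" ]; then echo "$1"; else echo "$2"; fi; }
min_mb 64 32   # only Base_1 resized: array stays at 32 MiB (65536 blocks)
min_mb 64 64   # both bases resized: array grows to 64 MiB (131072 blocks)
```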
00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72473 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72473' 00:09:53.222 Process raid pid: 72473 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72473 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72473 ']' 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.222 16:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.222 [2024-12-06 16:25:35.030779] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:09:53.222 [2024-12-06 16:25:35.030917] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.480 [2024-12-06 16:25:35.204957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.480 [2024-12-06 16:25:35.235746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.480 [2024-12-06 16:25:35.280779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.480 [2024-12-06 16:25:35.280817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.413 [2024-12-06 16:25:35.900634] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.413 [2024-12-06 16:25:35.900752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.413 [2024-12-06 16:25:35.900795] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.413 [2024-12-06 16:25:35.900808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.413 16:25:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.413 16:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.414 16:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.414 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.414 16:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.414 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.414 "name": "Existed_Raid", 00:09:54.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.414 "strip_size_kb": 64, 00:09:54.414 "state": "configuring", 00:09:54.414 
"raid_level": "raid0", 00:09:54.414 "superblock": false, 00:09:54.414 "num_base_bdevs": 2, 00:09:54.414 "num_base_bdevs_discovered": 0, 00:09:54.414 "num_base_bdevs_operational": 2, 00:09:54.414 "base_bdevs_list": [ 00:09:54.414 { 00:09:54.414 "name": "BaseBdev1", 00:09:54.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.414 "is_configured": false, 00:09:54.414 "data_offset": 0, 00:09:54.414 "data_size": 0 00:09:54.414 }, 00:09:54.414 { 00:09:54.414 "name": "BaseBdev2", 00:09:54.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.414 "is_configured": false, 00:09:54.414 "data_offset": 0, 00:09:54.414 "data_size": 0 00:09:54.414 } 00:09:54.414 ] 00:09:54.414 }' 00:09:54.414 16:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.414 16:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.672 [2024-12-06 16:25:36.359775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.672 [2024-12-06 16:25:36.359823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
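The `verify_raid_bdev_state` calls above filter `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and then compare fields against the expected values. The same extraction can be sketched against a captured info blob (jq assumed available; JSON abbreviated to the fields the check uses):

```shell
# Abbreviated stand-in for the rpc_cmd output; field names match the log dump.
raid_bdev_info='{"name":"Existed_Raid","state":"configuring","num_base_bdevs":2,"num_base_bdevs_discovered":0}'
state=$(echo "$raid_bdev_info" | jq -r '.state')
discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')
echo "$state $discovered"   # the test asserts state and discovered count separately
```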
00:09:54.672 [2024-12-06 16:25:36.371739] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.672 [2024-12-06 16:25:36.371783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.672 [2024-12-06 16:25:36.371792] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.672 [2024-12-06 16:25:36.371800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.672 [2024-12-06 16:25:36.393091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.672 BaseBdev1 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.672 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.672 [ 00:09:54.672 { 00:09:54.672 "name": "BaseBdev1", 00:09:54.672 "aliases": [ 00:09:54.672 "5f525361-c024-4bc7-b6f2-c6043a57be04" 00:09:54.672 ], 00:09:54.672 "product_name": "Malloc disk", 00:09:54.672 "block_size": 512, 00:09:54.672 "num_blocks": 65536, 00:09:54.672 "uuid": "5f525361-c024-4bc7-b6f2-c6043a57be04", 00:09:54.672 "assigned_rate_limits": { 00:09:54.672 "rw_ios_per_sec": 0, 00:09:54.672 "rw_mbytes_per_sec": 0, 00:09:54.672 "r_mbytes_per_sec": 0, 00:09:54.672 "w_mbytes_per_sec": 0 00:09:54.672 }, 00:09:54.672 "claimed": true, 00:09:54.672 "claim_type": "exclusive_write", 00:09:54.672 "zoned": false, 00:09:54.672 "supported_io_types": { 00:09:54.672 "read": true, 00:09:54.672 "write": true, 00:09:54.672 "unmap": true, 00:09:54.672 "flush": true, 00:09:54.672 "reset": true, 00:09:54.672 "nvme_admin": false, 00:09:54.672 "nvme_io": false, 00:09:54.672 "nvme_io_md": false, 00:09:54.672 "write_zeroes": true, 00:09:54.672 "zcopy": true, 00:09:54.672 "get_zone_info": false, 00:09:54.672 "zone_management": false, 00:09:54.672 "zone_append": false, 00:09:54.672 "compare": false, 00:09:54.672 "compare_and_write": false, 00:09:54.672 "abort": true, 00:09:54.672 "seek_hole": false, 00:09:54.672 "seek_data": false, 00:09:54.672 "copy": true, 00:09:54.672 "nvme_iov_md": 
false 00:09:54.672 }, 00:09:54.672 "memory_domains": [ 00:09:54.672 { 00:09:54.672 "dma_device_id": "system", 00:09:54.672 "dma_device_type": 1 00:09:54.672 }, 00:09:54.672 { 00:09:54.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.672 "dma_device_type": 2 00:09:54.673 } 00:09:54.673 ], 00:09:54.673 "driver_specific": {} 00:09:54.673 } 00:09:54.673 ] 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.673 
16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.673 "name": "Existed_Raid", 00:09:54.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.673 "strip_size_kb": 64, 00:09:54.673 "state": "configuring", 00:09:54.673 "raid_level": "raid0", 00:09:54.673 "superblock": false, 00:09:54.673 "num_base_bdevs": 2, 00:09:54.673 "num_base_bdevs_discovered": 1, 00:09:54.673 "num_base_bdevs_operational": 2, 00:09:54.673 "base_bdevs_list": [ 00:09:54.673 { 00:09:54.673 "name": "BaseBdev1", 00:09:54.673 "uuid": "5f525361-c024-4bc7-b6f2-c6043a57be04", 00:09:54.673 "is_configured": true, 00:09:54.673 "data_offset": 0, 00:09:54.673 "data_size": 65536 00:09:54.673 }, 00:09:54.673 { 00:09:54.673 "name": "BaseBdev2", 00:09:54.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.673 "is_configured": false, 00:09:54.673 "data_offset": 0, 00:09:54.673 "data_size": 0 00:09:54.673 } 00:09:54.673 ] 00:09:54.673 }' 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.673 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.239 [2024-12-06 16:25:36.892522] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.239 [2024-12-06 16:25:36.892586] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.239 [2024-12-06 16:25:36.904490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.239 [2024-12-06 16:25:36.906679] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.239 [2024-12-06 16:25:36.906783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.239 "name": "Existed_Raid", 00:09:55.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.239 "strip_size_kb": 64, 00:09:55.239 "state": "configuring", 00:09:55.239 "raid_level": "raid0", 00:09:55.239 "superblock": false, 00:09:55.239 "num_base_bdevs": 2, 00:09:55.239 "num_base_bdevs_discovered": 1, 00:09:55.239 "num_base_bdevs_operational": 2, 00:09:55.239 "base_bdevs_list": [ 00:09:55.239 { 00:09:55.239 "name": "BaseBdev1", 00:09:55.239 "uuid": "5f525361-c024-4bc7-b6f2-c6043a57be04", 00:09:55.239 "is_configured": true, 00:09:55.239 "data_offset": 0, 00:09:55.239 "data_size": 65536 00:09:55.239 }, 00:09:55.239 { 00:09:55.239 "name": "BaseBdev2", 00:09:55.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.239 "is_configured": false, 00:09:55.239 "data_offset": 0, 00:09:55.239 "data_size": 0 00:09:55.239 } 00:09:55.239 
] 00:09:55.239 }' 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.239 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.498 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:55.498 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.498 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.757 [2024-12-06 16:25:37.343140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.757 [2024-12-06 16:25:37.343288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:55.757 [2024-12-06 16:25:37.343347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:55.757 [2024-12-06 16:25:37.343687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:55.757 [2024-12-06 16:25:37.343899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:55.757 [2024-12-06 16:25:37.343958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:55.757 [2024-12-06 16:25:37.344258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.757 BaseBdev2 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.757 16:25:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.757 [ 00:09:55.757 { 00:09:55.757 "name": "BaseBdev2", 00:09:55.757 "aliases": [ 00:09:55.757 "07bdba3a-1044-42aa-9c0b-547dd1105090" 00:09:55.757 ], 00:09:55.757 "product_name": "Malloc disk", 00:09:55.757 "block_size": 512, 00:09:55.757 "num_blocks": 65536, 00:09:55.757 "uuid": "07bdba3a-1044-42aa-9c0b-547dd1105090", 00:09:55.757 "assigned_rate_limits": { 00:09:55.757 "rw_ios_per_sec": 0, 00:09:55.757 "rw_mbytes_per_sec": 0, 00:09:55.757 "r_mbytes_per_sec": 0, 00:09:55.757 "w_mbytes_per_sec": 0 00:09:55.757 }, 00:09:55.757 "claimed": true, 00:09:55.757 "claim_type": "exclusive_write", 00:09:55.757 "zoned": false, 00:09:55.757 "supported_io_types": { 00:09:55.757 "read": true, 00:09:55.757 "write": true, 00:09:55.757 "unmap": true, 00:09:55.757 "flush": true, 00:09:55.757 "reset": true, 00:09:55.757 "nvme_admin": false, 00:09:55.757 "nvme_io": false, 00:09:55.757 "nvme_io_md": 
false, 00:09:55.757 "write_zeroes": true, 00:09:55.757 "zcopy": true, 00:09:55.757 "get_zone_info": false, 00:09:55.757 "zone_management": false, 00:09:55.757 "zone_append": false, 00:09:55.757 "compare": false, 00:09:55.757 "compare_and_write": false, 00:09:55.757 "abort": true, 00:09:55.757 "seek_hole": false, 00:09:55.757 "seek_data": false, 00:09:55.757 "copy": true, 00:09:55.757 "nvme_iov_md": false 00:09:55.757 }, 00:09:55.757 "memory_domains": [ 00:09:55.757 { 00:09:55.757 "dma_device_id": "system", 00:09:55.757 "dma_device_type": 1 00:09:55.757 }, 00:09:55.757 { 00:09:55.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.757 "dma_device_type": 2 00:09:55.757 } 00:09:55.757 ], 00:09:55.757 "driver_specific": {} 00:09:55.757 } 00:09:55.757 ] 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.757 "name": "Existed_Raid", 00:09:55.757 "uuid": "fe172d13-604c-4885-9c81-7eaf198255ba", 00:09:55.757 "strip_size_kb": 64, 00:09:55.757 "state": "online", 00:09:55.757 "raid_level": "raid0", 00:09:55.757 "superblock": false, 00:09:55.757 "num_base_bdevs": 2, 00:09:55.757 "num_base_bdevs_discovered": 2, 00:09:55.757 "num_base_bdevs_operational": 2, 00:09:55.757 "base_bdevs_list": [ 00:09:55.757 { 00:09:55.757 "name": "BaseBdev1", 00:09:55.757 "uuid": "5f525361-c024-4bc7-b6f2-c6043a57be04", 00:09:55.757 "is_configured": true, 00:09:55.757 "data_offset": 0, 00:09:55.757 "data_size": 65536 00:09:55.757 }, 00:09:55.757 { 00:09:55.757 "name": "BaseBdev2", 00:09:55.757 "uuid": "07bdba3a-1044-42aa-9c0b-547dd1105090", 00:09:55.757 "is_configured": true, 00:09:55.757 "data_offset": 0, 00:09:55.757 "data_size": 65536 00:09:55.757 } 00:09:55.757 ] 00:09:55.757 }' 00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:55.757 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.016 [2024-12-06 16:25:37.822704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.016 "name": "Existed_Raid", 00:09:56.016 "aliases": [ 00:09:56.016 "fe172d13-604c-4885-9c81-7eaf198255ba" 00:09:56.016 ], 00:09:56.016 "product_name": "Raid Volume", 00:09:56.016 "block_size": 512, 00:09:56.016 "num_blocks": 131072, 00:09:56.016 "uuid": "fe172d13-604c-4885-9c81-7eaf198255ba", 00:09:56.016 "assigned_rate_limits": { 00:09:56.016 "rw_ios_per_sec": 0, 00:09:56.016 "rw_mbytes_per_sec": 0, 00:09:56.016 "r_mbytes_per_sec": 
0, 00:09:56.016 "w_mbytes_per_sec": 0 00:09:56.016 }, 00:09:56.016 "claimed": false, 00:09:56.016 "zoned": false, 00:09:56.016 "supported_io_types": { 00:09:56.016 "read": true, 00:09:56.016 "write": true, 00:09:56.016 "unmap": true, 00:09:56.016 "flush": true, 00:09:56.016 "reset": true, 00:09:56.016 "nvme_admin": false, 00:09:56.016 "nvme_io": false, 00:09:56.016 "nvme_io_md": false, 00:09:56.016 "write_zeroes": true, 00:09:56.016 "zcopy": false, 00:09:56.016 "get_zone_info": false, 00:09:56.016 "zone_management": false, 00:09:56.016 "zone_append": false, 00:09:56.016 "compare": false, 00:09:56.016 "compare_and_write": false, 00:09:56.016 "abort": false, 00:09:56.016 "seek_hole": false, 00:09:56.016 "seek_data": false, 00:09:56.016 "copy": false, 00:09:56.016 "nvme_iov_md": false 00:09:56.016 }, 00:09:56.016 "memory_domains": [ 00:09:56.016 { 00:09:56.016 "dma_device_id": "system", 00:09:56.016 "dma_device_type": 1 00:09:56.016 }, 00:09:56.016 { 00:09:56.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.016 "dma_device_type": 2 00:09:56.016 }, 00:09:56.016 { 00:09:56.016 "dma_device_id": "system", 00:09:56.016 "dma_device_type": 1 00:09:56.016 }, 00:09:56.016 { 00:09:56.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.016 "dma_device_type": 2 00:09:56.016 } 00:09:56.016 ], 00:09:56.016 "driver_specific": { 00:09:56.016 "raid": { 00:09:56.016 "uuid": "fe172d13-604c-4885-9c81-7eaf198255ba", 00:09:56.016 "strip_size_kb": 64, 00:09:56.016 "state": "online", 00:09:56.016 "raid_level": "raid0", 00:09:56.016 "superblock": false, 00:09:56.016 "num_base_bdevs": 2, 00:09:56.016 "num_base_bdevs_discovered": 2, 00:09:56.016 "num_base_bdevs_operational": 2, 00:09:56.016 "base_bdevs_list": [ 00:09:56.016 { 00:09:56.016 "name": "BaseBdev1", 00:09:56.016 "uuid": "5f525361-c024-4bc7-b6f2-c6043a57be04", 00:09:56.016 "is_configured": true, 00:09:56.016 "data_offset": 0, 00:09:56.016 "data_size": 65536 00:09:56.016 }, 00:09:56.016 { 00:09:56.016 "name": "BaseBdev2", 
00:09:56.016 "uuid": "07bdba3a-1044-42aa-9c0b-547dd1105090", 00:09:56.016 "is_configured": true, 00:09:56.016 "data_offset": 0, 00:09:56.016 "data_size": 65536 00:09:56.016 } 00:09:56.016 ] 00:09:56.016 } 00:09:56.016 } 00:09:56.016 }' 00:09:56.016 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:56.301 BaseBdev2' 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.301 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.301 [2024-12-06 16:25:38.038060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.301 [2024-12-06 16:25:38.038093] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.301 [2024-12-06 16:25:38.038151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.301 "name": "Existed_Raid", 00:09:56.301 "uuid": "fe172d13-604c-4885-9c81-7eaf198255ba", 00:09:56.301 "strip_size_kb": 64, 00:09:56.301 
"state": "offline", 00:09:56.301 "raid_level": "raid0", 00:09:56.301 "superblock": false, 00:09:56.301 "num_base_bdevs": 2, 00:09:56.301 "num_base_bdevs_discovered": 1, 00:09:56.301 "num_base_bdevs_operational": 1, 00:09:56.301 "base_bdevs_list": [ 00:09:56.301 { 00:09:56.301 "name": null, 00:09:56.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.301 "is_configured": false, 00:09:56.301 "data_offset": 0, 00:09:56.301 "data_size": 65536 00:09:56.301 }, 00:09:56.301 { 00:09:56.301 "name": "BaseBdev2", 00:09:56.301 "uuid": "07bdba3a-1044-42aa-9c0b-547dd1105090", 00:09:56.301 "is_configured": true, 00:09:56.301 "data_offset": 0, 00:09:56.301 "data_size": 65536 00:09:56.301 } 00:09:56.301 ] 00:09:56.301 }' 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.301 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.881 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:56.881 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.881 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.881 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.881 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.881 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.881 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.881 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.881 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.881 16:25:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:56.881 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.881 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.881 [2024-12-06 16:25:38.513117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:56.881 [2024-12-06 16:25:38.513188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72473 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72473 ']' 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 72473 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72473 00:09:56.882 killing process with pid 72473 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72473' 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72473 00:09:56.882 [2024-12-06 16:25:38.624816] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.882 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72473 00:09:56.882 [2024-12-06 16:25:38.625891] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:57.140 00:09:57.140 real 0m3.908s 00:09:57.140 user 0m6.166s 00:09:57.140 sys 0m0.746s 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.140 ************************************ 00:09:57.140 END TEST raid_state_function_test 00:09:57.140 ************************************ 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.140 16:25:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:57.140 16:25:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:09:57.140 16:25:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.140 16:25:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.140 ************************************ 00:09:57.140 START TEST raid_state_function_test_sb 00:09:57.140 ************************************ 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.140 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72715 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72715' 00:09:57.141 Process raid pid: 72715 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72715 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72715 ']' 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.141 16:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.400 [2024-12-06 16:25:39.009278] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:09:57.400 [2024-12-06 16:25:39.009521] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.400 [2024-12-06 16:25:39.180949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.400 [2024-12-06 16:25:39.208954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.659 [2024-12-06 16:25:39.252820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.659 [2024-12-06 16:25:39.252937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.226 [2024-12-06 16:25:39.852473] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:58.226 [2024-12-06 16:25:39.852539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.226 [2024-12-06 16:25:39.852555] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.226 [2024-12-06 16:25:39.852571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.226 "name": "Existed_Raid", 00:09:58.226 "uuid": "d80c95d7-ea09-4b12-9402-b76ee1a69874", 00:09:58.226 "strip_size_kb": 64, 00:09:58.226 "state": "configuring", 00:09:58.226 "raid_level": "raid0", 00:09:58.226 "superblock": true, 00:09:58.226 "num_base_bdevs": 2, 00:09:58.226 "num_base_bdevs_discovered": 0, 00:09:58.226 "num_base_bdevs_operational": 2, 00:09:58.226 "base_bdevs_list": [ 00:09:58.226 { 00:09:58.226 "name": "BaseBdev1", 00:09:58.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.226 "is_configured": false, 00:09:58.226 "data_offset": 0, 00:09:58.226 "data_size": 0 00:09:58.226 }, 00:09:58.226 { 00:09:58.226 "name": "BaseBdev2", 00:09:58.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.226 "is_configured": false, 00:09:58.226 "data_offset": 0, 00:09:58.226 "data_size": 0 00:09:58.226 } 00:09:58.226 ] 00:09:58.226 }' 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.226 16:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.484 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.484 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.484 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.484 [2024-12-06 16:25:40.303580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:09:58.484 [2024-12-06 16:25:40.303686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:58.484 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.484 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:58.484 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.484 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.484 [2024-12-06 16:25:40.315588] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.484 [2024-12-06 16:25:40.315695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.484 [2024-12-06 16:25:40.315741] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.484 [2024-12-06 16:25:40.315792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.484 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.484 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.484 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.743 [2024-12-06 16:25:40.337271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.743 BaseBdev1 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.743 [ 00:09:58.743 { 00:09:58.743 "name": "BaseBdev1", 00:09:58.743 "aliases": [ 00:09:58.743 "ab7ad9ac-8d81-4bdd-a5f0-b1deaa51535e" 00:09:58.743 ], 00:09:58.743 "product_name": "Malloc disk", 00:09:58.743 "block_size": 512, 00:09:58.743 "num_blocks": 65536, 00:09:58.743 "uuid": "ab7ad9ac-8d81-4bdd-a5f0-b1deaa51535e", 00:09:58.743 "assigned_rate_limits": { 00:09:58.743 "rw_ios_per_sec": 0, 00:09:58.743 "rw_mbytes_per_sec": 0, 00:09:58.743 "r_mbytes_per_sec": 0, 00:09:58.743 "w_mbytes_per_sec": 0 00:09:58.743 }, 00:09:58.743 "claimed": true, 
00:09:58.743 "claim_type": "exclusive_write", 00:09:58.743 "zoned": false, 00:09:58.743 "supported_io_types": { 00:09:58.743 "read": true, 00:09:58.743 "write": true, 00:09:58.743 "unmap": true, 00:09:58.743 "flush": true, 00:09:58.743 "reset": true, 00:09:58.743 "nvme_admin": false, 00:09:58.743 "nvme_io": false, 00:09:58.743 "nvme_io_md": false, 00:09:58.743 "write_zeroes": true, 00:09:58.743 "zcopy": true, 00:09:58.743 "get_zone_info": false, 00:09:58.743 "zone_management": false, 00:09:58.743 "zone_append": false, 00:09:58.743 "compare": false, 00:09:58.743 "compare_and_write": false, 00:09:58.743 "abort": true, 00:09:58.743 "seek_hole": false, 00:09:58.743 "seek_data": false, 00:09:58.743 "copy": true, 00:09:58.743 "nvme_iov_md": false 00:09:58.743 }, 00:09:58.743 "memory_domains": [ 00:09:58.743 { 00:09:58.743 "dma_device_id": "system", 00:09:58.743 "dma_device_type": 1 00:09:58.743 }, 00:09:58.743 { 00:09:58.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.743 "dma_device_type": 2 00:09:58.743 } 00:09:58.743 ], 00:09:58.743 "driver_specific": {} 00:09:58.743 } 00:09:58.743 ] 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.743 16:25:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.743 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.743 "name": "Existed_Raid", 00:09:58.743 "uuid": "a9eaf82b-d7d1-45aa-aebc-dd1a443f39e2", 00:09:58.743 "strip_size_kb": 64, 00:09:58.743 "state": "configuring", 00:09:58.743 "raid_level": "raid0", 00:09:58.743 "superblock": true, 00:09:58.743 "num_base_bdevs": 2, 00:09:58.743 "num_base_bdevs_discovered": 1, 00:09:58.743 "num_base_bdevs_operational": 2, 00:09:58.743 "base_bdevs_list": [ 00:09:58.743 { 00:09:58.744 "name": "BaseBdev1", 00:09:58.744 "uuid": "ab7ad9ac-8d81-4bdd-a5f0-b1deaa51535e", 00:09:58.744 "is_configured": true, 00:09:58.744 "data_offset": 2048, 00:09:58.744 "data_size": 63488 00:09:58.744 }, 00:09:58.744 { 00:09:58.744 "name": "BaseBdev2", 00:09:58.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.744 
"is_configured": false, 00:09:58.744 "data_offset": 0, 00:09:58.744 "data_size": 0 00:09:58.744 } 00:09:58.744 ] 00:09:58.744 }' 00:09:58.744 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.744 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.003 [2024-12-06 16:25:40.820498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.003 [2024-12-06 16:25:40.820554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.003 [2024-12-06 16:25:40.832507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.003 [2024-12-06 16:25:40.834605] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.003 [2024-12-06 16:25:40.834683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.003 16:25:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.003 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.264 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.264 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.264 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.264 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.264 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.264 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.264 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.264 16:25:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.264 "name": "Existed_Raid", 00:09:59.264 "uuid": "c3ab193c-3d77-4f33-bc17-d0797868c115", 00:09:59.264 "strip_size_kb": 64, 00:09:59.264 "state": "configuring", 00:09:59.264 "raid_level": "raid0", 00:09:59.264 "superblock": true, 00:09:59.264 "num_base_bdevs": 2, 00:09:59.264 "num_base_bdevs_discovered": 1, 00:09:59.264 "num_base_bdevs_operational": 2, 00:09:59.264 "base_bdevs_list": [ 00:09:59.264 { 00:09:59.264 "name": "BaseBdev1", 00:09:59.264 "uuid": "ab7ad9ac-8d81-4bdd-a5f0-b1deaa51535e", 00:09:59.264 "is_configured": true, 00:09:59.264 "data_offset": 2048, 00:09:59.264 "data_size": 63488 00:09:59.264 }, 00:09:59.264 { 00:09:59.264 "name": "BaseBdev2", 00:09:59.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.264 "is_configured": false, 00:09:59.264 "data_offset": 0, 00:09:59.264 "data_size": 0 00:09:59.264 } 00:09:59.264 ] 00:09:59.264 }' 00:09:59.264 16:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.264 16:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.525 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:59.525 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.525 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.525 [2024-12-06 16:25:41.271009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.525 [2024-12-06 16:25:41.271232] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:59.525 [2024-12-06 16:25:41.271257] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:59.525 BaseBdev2 00:09:59.525 [2024-12-06 16:25:41.271591] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:59.525 [2024-12-06 16:25:41.271779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:59.525 [2024-12-06 16:25:41.271797] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:59.525 [2024-12-06 16:25:41.271940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.525 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.525 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:59.525 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:59.525 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.526 
16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.526 [ 00:09:59.526 { 00:09:59.526 "name": "BaseBdev2", 00:09:59.526 "aliases": [ 00:09:59.526 "7d4dffb1-1b71-4608-a869-9262186774df" 00:09:59.526 ], 00:09:59.526 "product_name": "Malloc disk", 00:09:59.526 "block_size": 512, 00:09:59.526 "num_blocks": 65536, 00:09:59.526 "uuid": "7d4dffb1-1b71-4608-a869-9262186774df", 00:09:59.526 "assigned_rate_limits": { 00:09:59.526 "rw_ios_per_sec": 0, 00:09:59.526 "rw_mbytes_per_sec": 0, 00:09:59.526 "r_mbytes_per_sec": 0, 00:09:59.526 "w_mbytes_per_sec": 0 00:09:59.526 }, 00:09:59.526 "claimed": true, 00:09:59.526 "claim_type": "exclusive_write", 00:09:59.526 "zoned": false, 00:09:59.526 "supported_io_types": { 00:09:59.526 "read": true, 00:09:59.526 "write": true, 00:09:59.526 "unmap": true, 00:09:59.526 "flush": true, 00:09:59.526 "reset": true, 00:09:59.526 "nvme_admin": false, 00:09:59.526 "nvme_io": false, 00:09:59.526 "nvme_io_md": false, 00:09:59.526 "write_zeroes": true, 00:09:59.526 "zcopy": true, 00:09:59.526 "get_zone_info": false, 00:09:59.526 "zone_management": false, 00:09:59.526 "zone_append": false, 00:09:59.526 "compare": false, 00:09:59.526 "compare_and_write": false, 00:09:59.526 "abort": true, 00:09:59.526 "seek_hole": false, 00:09:59.526 "seek_data": false, 00:09:59.526 "copy": true, 00:09:59.526 "nvme_iov_md": false 00:09:59.526 }, 00:09:59.526 "memory_domains": [ 00:09:59.526 { 00:09:59.526 "dma_device_id": "system", 00:09:59.526 "dma_device_type": 1 00:09:59.526 }, 00:09:59.526 { 00:09:59.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.526 "dma_device_type": 2 00:09:59.526 } 00:09:59.526 ], 00:09:59.526 "driver_specific": {} 00:09:59.526 } 00:09:59.526 ] 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:59.526 16:25:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.526 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.787 16:25:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.787 "name": "Existed_Raid", 00:09:59.787 "uuid": "c3ab193c-3d77-4f33-bc17-d0797868c115", 00:09:59.787 "strip_size_kb": 64, 00:09:59.787 "state": "online", 00:09:59.787 "raid_level": "raid0", 00:09:59.787 "superblock": true, 00:09:59.787 "num_base_bdevs": 2, 00:09:59.787 "num_base_bdevs_discovered": 2, 00:09:59.787 "num_base_bdevs_operational": 2, 00:09:59.787 "base_bdevs_list": [ 00:09:59.787 { 00:09:59.787 "name": "BaseBdev1", 00:09:59.787 "uuid": "ab7ad9ac-8d81-4bdd-a5f0-b1deaa51535e", 00:09:59.787 "is_configured": true, 00:09:59.787 "data_offset": 2048, 00:09:59.787 "data_size": 63488 00:09:59.787 }, 00:09:59.787 { 00:09:59.787 "name": "BaseBdev2", 00:09:59.787 "uuid": "7d4dffb1-1b71-4608-a869-9262186774df", 00:09:59.787 "is_configured": true, 00:09:59.787 "data_offset": 2048, 00:09:59.787 "data_size": 63488 00:09:59.787 } 00:09:59.787 ] 00:09:59.787 }' 00:09:59.787 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.787 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.046 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:00.046 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:00.046 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.046 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.046 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.046 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.046 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:00.046 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.046 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.046 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.046 [2024-12-06 16:25:41.770597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.046 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.046 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.046 "name": "Existed_Raid", 00:10:00.046 "aliases": [ 00:10:00.046 "c3ab193c-3d77-4f33-bc17-d0797868c115" 00:10:00.046 ], 00:10:00.046 "product_name": "Raid Volume", 00:10:00.046 "block_size": 512, 00:10:00.046 "num_blocks": 126976, 00:10:00.046 "uuid": "c3ab193c-3d77-4f33-bc17-d0797868c115", 00:10:00.046 "assigned_rate_limits": { 00:10:00.046 "rw_ios_per_sec": 0, 00:10:00.047 "rw_mbytes_per_sec": 0, 00:10:00.047 "r_mbytes_per_sec": 0, 00:10:00.047 "w_mbytes_per_sec": 0 00:10:00.047 }, 00:10:00.047 "claimed": false, 00:10:00.047 "zoned": false, 00:10:00.047 "supported_io_types": { 00:10:00.047 "read": true, 00:10:00.047 "write": true, 00:10:00.047 "unmap": true, 00:10:00.047 "flush": true, 00:10:00.047 "reset": true, 00:10:00.047 "nvme_admin": false, 00:10:00.047 "nvme_io": false, 00:10:00.047 "nvme_io_md": false, 00:10:00.047 "write_zeroes": true, 00:10:00.047 "zcopy": false, 00:10:00.047 "get_zone_info": false, 00:10:00.047 "zone_management": false, 00:10:00.047 "zone_append": false, 00:10:00.047 "compare": false, 00:10:00.047 "compare_and_write": false, 00:10:00.047 "abort": false, 00:10:00.047 "seek_hole": false, 00:10:00.047 "seek_data": false, 00:10:00.047 "copy": false, 00:10:00.047 "nvme_iov_md": false 00:10:00.047 }, 00:10:00.047 "memory_domains": [ 00:10:00.047 { 00:10:00.047 
"dma_device_id": "system", 00:10:00.047 "dma_device_type": 1 00:10:00.047 }, 00:10:00.047 { 00:10:00.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.047 "dma_device_type": 2 00:10:00.047 }, 00:10:00.047 { 00:10:00.047 "dma_device_id": "system", 00:10:00.047 "dma_device_type": 1 00:10:00.047 }, 00:10:00.047 { 00:10:00.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.047 "dma_device_type": 2 00:10:00.047 } 00:10:00.047 ], 00:10:00.047 "driver_specific": { 00:10:00.047 "raid": { 00:10:00.047 "uuid": "c3ab193c-3d77-4f33-bc17-d0797868c115", 00:10:00.047 "strip_size_kb": 64, 00:10:00.047 "state": "online", 00:10:00.047 "raid_level": "raid0", 00:10:00.047 "superblock": true, 00:10:00.047 "num_base_bdevs": 2, 00:10:00.047 "num_base_bdevs_discovered": 2, 00:10:00.047 "num_base_bdevs_operational": 2, 00:10:00.047 "base_bdevs_list": [ 00:10:00.047 { 00:10:00.047 "name": "BaseBdev1", 00:10:00.047 "uuid": "ab7ad9ac-8d81-4bdd-a5f0-b1deaa51535e", 00:10:00.047 "is_configured": true, 00:10:00.047 "data_offset": 2048, 00:10:00.047 "data_size": 63488 00:10:00.047 }, 00:10:00.047 { 00:10:00.047 "name": "BaseBdev2", 00:10:00.047 "uuid": "7d4dffb1-1b71-4608-a869-9262186774df", 00:10:00.047 "is_configured": true, 00:10:00.047 "data_offset": 2048, 00:10:00.047 "data_size": 63488 00:10:00.047 } 00:10:00.047 ] 00:10:00.047 } 00:10:00.047 } 00:10:00.047 }' 00:10:00.047 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.047 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:00.047 BaseBdev2' 00:10:00.047 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.307 16:25:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.307 16:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.307 [2024-12-06 16:25:41.993907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.307 [2024-12-06 16:25:41.994003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.307 [2024-12-06 16:25:41.994084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.307 "name": "Existed_Raid", 00:10:00.307 "uuid": "c3ab193c-3d77-4f33-bc17-d0797868c115", 00:10:00.307 "strip_size_kb": 64, 00:10:00.307 "state": "offline", 00:10:00.307 "raid_level": "raid0", 00:10:00.307 "superblock": true, 00:10:00.307 "num_base_bdevs": 2, 00:10:00.307 "num_base_bdevs_discovered": 1, 00:10:00.307 "num_base_bdevs_operational": 1, 00:10:00.307 "base_bdevs_list": [ 00:10:00.307 { 00:10:00.307 "name": null, 00:10:00.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.307 "is_configured": false, 00:10:00.307 "data_offset": 0, 00:10:00.307 "data_size": 63488 00:10:00.307 }, 00:10:00.307 { 00:10:00.307 "name": "BaseBdev2", 00:10:00.307 "uuid": "7d4dffb1-1b71-4608-a869-9262186774df", 00:10:00.307 "is_configured": true, 00:10:00.307 "data_offset": 2048, 00:10:00.307 "data_size": 63488 00:10:00.307 } 00:10:00.307 ] 
00:10:00.307 }' 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.307 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.876 [2024-12-06 16:25:42.476887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.876 [2024-12-06 16:25:42.477047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.876 16:25:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72715 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72715 ']' 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72715 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72715 00:10:00.876 killing process with pid 72715 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72715' 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72715 00:10:00.876 [2024-12-06 16:25:42.588234] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.876 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72715 00:10:00.876 [2024-12-06 16:25:42.589347] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.134 ************************************ 00:10:01.134 END TEST raid_state_function_test_sb 00:10:01.134 ************************************ 00:10:01.134 16:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:01.134 00:10:01.134 real 0m3.897s 00:10:01.134 user 0m6.173s 00:10:01.134 sys 0m0.745s 00:10:01.134 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.134 16:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.134 16:25:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:01.134 16:25:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.134 16:25:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.134 16:25:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.134 ************************************ 00:10:01.134 START TEST raid_superblock_test 00:10:01.134 ************************************ 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:01.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72956 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:01.134 16:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72956 00:10:01.135 16:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72956 ']' 00:10:01.135 16:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.135 16:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.135 16:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.135 16:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.135 16:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.135 [2024-12-06 16:25:42.961941] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:10:01.135 [2024-12-06 16:25:42.962154] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72956 ] 00:10:01.446 [2024-12-06 16:25:43.133587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.446 [2024-12-06 16:25:43.161916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.446 [2024-12-06 16:25:43.207093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.446 [2024-12-06 16:25:43.207269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:02.040 
16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.040 malloc1 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.040 [2024-12-06 16:25:43.853752] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:02.040 [2024-12-06 16:25:43.853820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.040 [2024-12-06 16:25:43.853840] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:02.040 [2024-12-06 16:25:43.853854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.040 [2024-12-06 16:25:43.856154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.040 [2024-12-06 16:25:43.856290] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:02.040 pt1 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.040 malloc2 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.040 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.299 [2024-12-06 16:25:43.882989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:02.299 [2024-12-06 16:25:43.883094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.299 [2024-12-06 16:25:43.883129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:02.299 [2024-12-06 16:25:43.883164] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.299 [2024-12-06 16:25:43.885525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.299 [2024-12-06 16:25:43.885600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:02.299 
pt2 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.299 [2024-12-06 16:25:43.894999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:02.299 [2024-12-06 16:25:43.896933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.299 [2024-12-06 16:25:43.897116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:02.299 [2024-12-06 16:25:43.897167] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:02.299 [2024-12-06 16:25:43.897471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:02.299 [2024-12-06 16:25:43.897644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:02.299 [2024-12-06 16:25:43.897687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:02.299 [2024-12-06 16:25:43.897849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.299 "name": "raid_bdev1", 00:10:02.299 "uuid": "b5438568-cd53-497d-9650-b75f5a5e982b", 00:10:02.299 "strip_size_kb": 64, 00:10:02.299 "state": "online", 00:10:02.299 "raid_level": "raid0", 00:10:02.299 "superblock": true, 00:10:02.299 "num_base_bdevs": 2, 00:10:02.299 "num_base_bdevs_discovered": 2, 00:10:02.299 "num_base_bdevs_operational": 2, 00:10:02.299 "base_bdevs_list": [ 00:10:02.299 { 00:10:02.299 "name": "pt1", 
00:10:02.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.299 "is_configured": true, 00:10:02.299 "data_offset": 2048, 00:10:02.299 "data_size": 63488 00:10:02.299 }, 00:10:02.299 { 00:10:02.299 "name": "pt2", 00:10:02.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.299 "is_configured": true, 00:10:02.299 "data_offset": 2048, 00:10:02.299 "data_size": 63488 00:10:02.299 } 00:10:02.299 ] 00:10:02.299 }' 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.299 16:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.558 [2024-12-06 16:25:44.330616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:02.558 "name": "raid_bdev1", 00:10:02.558 "aliases": [ 00:10:02.558 "b5438568-cd53-497d-9650-b75f5a5e982b" 00:10:02.558 ], 00:10:02.558 "product_name": "Raid Volume", 00:10:02.558 "block_size": 512, 00:10:02.558 "num_blocks": 126976, 00:10:02.558 "uuid": "b5438568-cd53-497d-9650-b75f5a5e982b", 00:10:02.558 "assigned_rate_limits": { 00:10:02.558 "rw_ios_per_sec": 0, 00:10:02.558 "rw_mbytes_per_sec": 0, 00:10:02.558 "r_mbytes_per_sec": 0, 00:10:02.558 "w_mbytes_per_sec": 0 00:10:02.558 }, 00:10:02.558 "claimed": false, 00:10:02.558 "zoned": false, 00:10:02.558 "supported_io_types": { 00:10:02.558 "read": true, 00:10:02.558 "write": true, 00:10:02.558 "unmap": true, 00:10:02.558 "flush": true, 00:10:02.558 "reset": true, 00:10:02.558 "nvme_admin": false, 00:10:02.558 "nvme_io": false, 00:10:02.558 "nvme_io_md": false, 00:10:02.558 "write_zeroes": true, 00:10:02.558 "zcopy": false, 00:10:02.558 "get_zone_info": false, 00:10:02.558 "zone_management": false, 00:10:02.558 "zone_append": false, 00:10:02.558 "compare": false, 00:10:02.558 "compare_and_write": false, 00:10:02.558 "abort": false, 00:10:02.558 "seek_hole": false, 00:10:02.558 "seek_data": false, 00:10:02.558 "copy": false, 00:10:02.558 "nvme_iov_md": false 00:10:02.558 }, 00:10:02.558 "memory_domains": [ 00:10:02.558 { 00:10:02.558 "dma_device_id": "system", 00:10:02.558 "dma_device_type": 1 00:10:02.558 }, 00:10:02.558 { 00:10:02.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.558 "dma_device_type": 2 00:10:02.558 }, 00:10:02.558 { 00:10:02.558 "dma_device_id": "system", 00:10:02.558 "dma_device_type": 1 00:10:02.558 }, 00:10:02.558 { 00:10:02.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.558 "dma_device_type": 2 00:10:02.558 } 00:10:02.558 ], 00:10:02.558 "driver_specific": { 00:10:02.558 "raid": { 00:10:02.558 "uuid": "b5438568-cd53-497d-9650-b75f5a5e982b", 00:10:02.558 "strip_size_kb": 64, 00:10:02.558 "state": "online", 00:10:02.558 
"raid_level": "raid0", 00:10:02.558 "superblock": true, 00:10:02.558 "num_base_bdevs": 2, 00:10:02.558 "num_base_bdevs_discovered": 2, 00:10:02.558 "num_base_bdevs_operational": 2, 00:10:02.558 "base_bdevs_list": [ 00:10:02.558 { 00:10:02.558 "name": "pt1", 00:10:02.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.558 "is_configured": true, 00:10:02.558 "data_offset": 2048, 00:10:02.558 "data_size": 63488 00:10:02.558 }, 00:10:02.558 { 00:10:02.558 "name": "pt2", 00:10:02.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.558 "is_configured": true, 00:10:02.558 "data_offset": 2048, 00:10:02.558 "data_size": 63488 00:10:02.558 } 00:10:02.558 ] 00:10:02.558 } 00:10:02.558 } 00:10:02.558 }' 00:10:02.558 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:02.817 pt2' 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.817 16:25:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:02.817 [2024-12-06 16:25:44.574143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b5438568-cd53-497d-9650-b75f5a5e982b 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
b5438568-cd53-497d-9650-b75f5a5e982b ']' 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 [2024-12-06 16:25:44.621783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.817 [2024-12-06 16:25:44.621826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.817 [2024-12-06 16:25:44.621924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.817 [2024-12-06 16:25:44.621980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.817 [2024-12-06 16:25:44.621990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.817 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:03.076 16:25:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create 
-z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.076 [2024-12-06 16:25:44.765608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:03.076 [2024-12-06 16:25:44.767784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:03.076 [2024-12-06 16:25:44.767910] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:03.076 [2024-12-06 16:25:44.768018] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:03.076 [2024-12-06 16:25:44.768080] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:03.076 [2024-12-06 16:25:44.768125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:10:03.076 request: 00:10:03.076 { 00:10:03.076 "name": "raid_bdev1", 00:10:03.076 "raid_level": "raid0", 00:10:03.076 "base_bdevs": [ 00:10:03.076 "malloc1", 00:10:03.076 "malloc2" 00:10:03.076 ], 00:10:03.076 "strip_size_kb": 64, 00:10:03.076 
"superblock": false, 00:10:03.076 "method": "bdev_raid_create", 00:10:03.076 "req_id": 1 00:10:03.076 } 00:10:03.076 Got JSON-RPC error response 00:10:03.076 response: 00:10:03.076 { 00:10:03.076 "code": -17, 00:10:03.076 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:03.076 } 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.076 [2024-12-06 16:25:44.833394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:10:03.076 [2024-12-06 16:25:44.833543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.076 [2024-12-06 16:25:44.833588] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:03.076 [2024-12-06 16:25:44.833626] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.076 [2024-12-06 16:25:44.836131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.076 [2024-12-06 16:25:44.836228] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:03.076 [2024-12-06 16:25:44.836390] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:03.076 [2024-12-06 16:25:44.836469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:03.076 pt1 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.076 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.077 "name": "raid_bdev1", 00:10:03.077 "uuid": "b5438568-cd53-497d-9650-b75f5a5e982b", 00:10:03.077 "strip_size_kb": 64, 00:10:03.077 "state": "configuring", 00:10:03.077 "raid_level": "raid0", 00:10:03.077 "superblock": true, 00:10:03.077 "num_base_bdevs": 2, 00:10:03.077 "num_base_bdevs_discovered": 1, 00:10:03.077 "num_base_bdevs_operational": 2, 00:10:03.077 "base_bdevs_list": [ 00:10:03.077 { 00:10:03.077 "name": "pt1", 00:10:03.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.077 "is_configured": true, 00:10:03.077 "data_offset": 2048, 00:10:03.077 "data_size": 63488 00:10:03.077 }, 00:10:03.077 { 00:10:03.077 "name": null, 00:10:03.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.077 "is_configured": false, 00:10:03.077 "data_offset": 2048, 00:10:03.077 "data_size": 63488 00:10:03.077 } 00:10:03.077 ] 00:10:03.077 }' 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.077 16:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.641 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:03.641 16:25:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:03.641 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:03.641 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:03.641 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.641 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.641 [2024-12-06 16:25:45.296673] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:03.641 [2024-12-06 16:25:45.296796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.641 [2024-12-06 16:25:45.296844] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:03.641 [2024-12-06 16:25:45.296855] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.641 [2024-12-06 16:25:45.297376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.641 [2024-12-06 16:25:45.297399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:03.641 [2024-12-06 16:25:45.297479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:03.641 [2024-12-06 16:25:45.297508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:03.641 [2024-12-06 16:25:45.297605] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:03.641 [2024-12-06 16:25:45.297615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:03.641 [2024-12-06 16:25:45.297865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:03.641 [2024-12-06 16:25:45.297991] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 
00:10:03.642 [2024-12-06 16:25:45.298005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:03.642 [2024-12-06 16:25:45.298114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.642 pt2 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.642 "name": "raid_bdev1", 00:10:03.642 "uuid": "b5438568-cd53-497d-9650-b75f5a5e982b", 00:10:03.642 "strip_size_kb": 64, 00:10:03.642 "state": "online", 00:10:03.642 "raid_level": "raid0", 00:10:03.642 "superblock": true, 00:10:03.642 "num_base_bdevs": 2, 00:10:03.642 "num_base_bdevs_discovered": 2, 00:10:03.642 "num_base_bdevs_operational": 2, 00:10:03.642 "base_bdevs_list": [ 00:10:03.642 { 00:10:03.642 "name": "pt1", 00:10:03.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.642 "is_configured": true, 00:10:03.642 "data_offset": 2048, 00:10:03.642 "data_size": 63488 00:10:03.642 }, 00:10:03.642 { 00:10:03.642 "name": "pt2", 00:10:03.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.642 "is_configured": true, 00:10:03.642 "data_offset": 2048, 00:10:03.642 "data_size": 63488 00:10:03.642 } 00:10:03.642 ] 00:10:03.642 }' 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.642 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.207 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:04.207 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:04.207 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.207 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.207 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.207 16:25:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.207 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:04.207 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.207 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.207 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.207 [2024-12-06 16:25:45.756219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.207 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.207 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.207 "name": "raid_bdev1", 00:10:04.207 "aliases": [ 00:10:04.207 "b5438568-cd53-497d-9650-b75f5a5e982b" 00:10:04.207 ], 00:10:04.208 "product_name": "Raid Volume", 00:10:04.208 "block_size": 512, 00:10:04.208 "num_blocks": 126976, 00:10:04.208 "uuid": "b5438568-cd53-497d-9650-b75f5a5e982b", 00:10:04.208 "assigned_rate_limits": { 00:10:04.208 "rw_ios_per_sec": 0, 00:10:04.208 "rw_mbytes_per_sec": 0, 00:10:04.208 "r_mbytes_per_sec": 0, 00:10:04.208 "w_mbytes_per_sec": 0 00:10:04.208 }, 00:10:04.208 "claimed": false, 00:10:04.208 "zoned": false, 00:10:04.208 "supported_io_types": { 00:10:04.208 "read": true, 00:10:04.208 "write": true, 00:10:04.208 "unmap": true, 00:10:04.208 "flush": true, 00:10:04.208 "reset": true, 00:10:04.208 "nvme_admin": false, 00:10:04.208 "nvme_io": false, 00:10:04.208 "nvme_io_md": false, 00:10:04.208 "write_zeroes": true, 00:10:04.208 "zcopy": false, 00:10:04.208 "get_zone_info": false, 00:10:04.208 "zone_management": false, 00:10:04.208 "zone_append": false, 00:10:04.208 "compare": false, 00:10:04.208 "compare_and_write": false, 00:10:04.208 "abort": false, 00:10:04.208 "seek_hole": false, 00:10:04.208 
"seek_data": false, 00:10:04.208 "copy": false, 00:10:04.208 "nvme_iov_md": false 00:10:04.208 }, 00:10:04.208 "memory_domains": [ 00:10:04.208 { 00:10:04.208 "dma_device_id": "system", 00:10:04.208 "dma_device_type": 1 00:10:04.208 }, 00:10:04.208 { 00:10:04.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.208 "dma_device_type": 2 00:10:04.208 }, 00:10:04.208 { 00:10:04.208 "dma_device_id": "system", 00:10:04.208 "dma_device_type": 1 00:10:04.208 }, 00:10:04.208 { 00:10:04.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.208 "dma_device_type": 2 00:10:04.208 } 00:10:04.208 ], 00:10:04.208 "driver_specific": { 00:10:04.208 "raid": { 00:10:04.208 "uuid": "b5438568-cd53-497d-9650-b75f5a5e982b", 00:10:04.208 "strip_size_kb": 64, 00:10:04.208 "state": "online", 00:10:04.208 "raid_level": "raid0", 00:10:04.208 "superblock": true, 00:10:04.208 "num_base_bdevs": 2, 00:10:04.208 "num_base_bdevs_discovered": 2, 00:10:04.208 "num_base_bdevs_operational": 2, 00:10:04.208 "base_bdevs_list": [ 00:10:04.208 { 00:10:04.208 "name": "pt1", 00:10:04.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.208 "is_configured": true, 00:10:04.208 "data_offset": 2048, 00:10:04.208 "data_size": 63488 00:10:04.208 }, 00:10:04.208 { 00:10:04.208 "name": "pt2", 00:10:04.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.208 "is_configured": true, 00:10:04.208 "data_offset": 2048, 00:10:04.208 "data_size": 63488 00:10:04.208 } 00:10:04.208 ] 00:10:04.208 } 00:10:04.208 } 00:10:04.208 }' 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:04.208 pt2' 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.208 16:25:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 [2024-12-06 16:25:45.947873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b5438568-cd53-497d-9650-b75f5a5e982b '!=' b5438568-cd53-497d-9650-b75f5a5e982b ']' 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72956 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72956 ']' 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72956 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.208 16:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72956 00:10:04.208 16:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.208 killing process with pid 72956 00:10:04.208 16:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.208 16:25:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 72956' 00:10:04.208 16:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72956 00:10:04.208 [2024-12-06 16:25:46.030444] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.208 [2024-12-06 16:25:46.030561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.208 16:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72956 00:10:04.208 [2024-12-06 16:25:46.030620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.208 [2024-12-06 16:25:46.030630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:04.466 [2024-12-06 16:25:46.055156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.466 16:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:04.466 00:10:04.466 real 0m3.403s 00:10:04.466 user 0m5.251s 00:10:04.466 sys 0m0.767s 00:10:04.466 16:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.466 ************************************ 00:10:04.466 END TEST raid_superblock_test 00:10:04.466 ************************************ 00:10:04.466 16:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.723 16:25:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:04.723 16:25:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:04.723 16:25:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.723 16:25:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.723 ************************************ 00:10:04.723 START TEST raid_read_error_test 00:10:04.723 ************************************ 00:10:04.723 16:25:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:04.723 16:25:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TSwKs4lQtE 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73151 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73151 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73151 ']' 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.723 16:25:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.723 [2024-12-06 16:25:46.444383] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:10:04.723 [2024-12-06 16:25:46.444604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73151 ] 00:10:04.981 [2024-12-06 16:25:46.615170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.981 [2024-12-06 16:25:46.643957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.981 [2024-12-06 16:25:46.689313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.981 [2024-12-06 16:25:46.689406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.548 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.549 BaseBdev1_malloc 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.549 true 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.549 [2024-12-06 16:25:47.330651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:05.549 [2024-12-06 16:25:47.330718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.549 [2024-12-06 16:25:47.330752] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:05.549 [2024-12-06 16:25:47.330761] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.549 [2024-12-06 16:25:47.333142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.549 [2024-12-06 16:25:47.333184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:05.549 BaseBdev1 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.549 BaseBdev2_malloc 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.549 true 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.549 [2024-12-06 16:25:47.371750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:05.549 [2024-12-06 16:25:47.371871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.549 [2024-12-06 16:25:47.371897] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:05.549 [2024-12-06 16:25:47.371906] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.549 [2024-12-06 16:25:47.374268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.549 [2024-12-06 16:25:47.374305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:05.549 BaseBdev2 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.549 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.549 [2024-12-06 16:25:47.383783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:10:05.549 [2024-12-06 16:25:47.385931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.549 [2024-12-06 16:25:47.386185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:05.549 [2024-12-06 16:25:47.386218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:05.807 [2024-12-06 16:25:47.386547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:05.807 [2024-12-06 16:25:47.386733] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:05.807 [2024-12-06 16:25:47.386747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:05.807 [2024-12-06 16:25:47.386911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.807 "name": "raid_bdev1", 00:10:05.807 "uuid": "209f59bc-6273-4086-9fee-e2119671e708", 00:10:05.807 "strip_size_kb": 64, 00:10:05.807 "state": "online", 00:10:05.807 "raid_level": "raid0", 00:10:05.807 "superblock": true, 00:10:05.807 "num_base_bdevs": 2, 00:10:05.807 "num_base_bdevs_discovered": 2, 00:10:05.807 "num_base_bdevs_operational": 2, 00:10:05.807 "base_bdevs_list": [ 00:10:05.807 { 00:10:05.807 "name": "BaseBdev1", 00:10:05.807 "uuid": "a146b6bb-4da4-57fe-a5c3-10ff2e1cafb1", 00:10:05.807 "is_configured": true, 00:10:05.807 "data_offset": 2048, 00:10:05.807 "data_size": 63488 00:10:05.807 }, 00:10:05.807 { 00:10:05.807 "name": "BaseBdev2", 00:10:05.807 "uuid": "2c070165-c061-52fb-9cd4-4773551287cc", 00:10:05.807 "is_configured": true, 00:10:05.807 "data_offset": 2048, 00:10:05.807 "data_size": 63488 00:10:05.807 } 00:10:05.807 ] 00:10:05.807 }' 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.807 16:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.076 16:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:06.076 16:25:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:06.076 [2024-12-06 16:25:47.903345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.026 16:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.285 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.285 "name": "raid_bdev1", 00:10:07.285 "uuid": "209f59bc-6273-4086-9fee-e2119671e708", 00:10:07.285 "strip_size_kb": 64, 00:10:07.285 "state": "online", 00:10:07.285 "raid_level": "raid0", 00:10:07.285 "superblock": true, 00:10:07.285 "num_base_bdevs": 2, 00:10:07.285 "num_base_bdevs_discovered": 2, 00:10:07.285 "num_base_bdevs_operational": 2, 00:10:07.285 "base_bdevs_list": [ 00:10:07.285 { 00:10:07.285 "name": "BaseBdev1", 00:10:07.285 "uuid": "a146b6bb-4da4-57fe-a5c3-10ff2e1cafb1", 00:10:07.285 "is_configured": true, 00:10:07.285 "data_offset": 2048, 00:10:07.285 "data_size": 63488 00:10:07.285 }, 00:10:07.285 { 00:10:07.285 "name": "BaseBdev2", 00:10:07.285 "uuid": "2c070165-c061-52fb-9cd4-4773551287cc", 00:10:07.285 "is_configured": true, 00:10:07.285 "data_offset": 2048, 00:10:07.285 "data_size": 63488 00:10:07.285 } 00:10:07.285 ] 00:10:07.285 }' 00:10:07.285 16:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.285 16:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:07.545 16:25:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.545 [2024-12-06 16:25:49.292274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.545 [2024-12-06 16:25:49.292416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.545 [2024-12-06 16:25:49.295527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.545 [2024-12-06 16:25:49.295632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.545 [2024-12-06 16:25:49.295680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.545 [2024-12-06 16:25:49.295691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:07.545 { 00:10:07.545 "results": [ 00:10:07.545 { 00:10:07.545 "job": "raid_bdev1", 00:10:07.545 "core_mask": "0x1", 00:10:07.545 "workload": "randrw", 00:10:07.545 "percentage": 50, 00:10:07.545 "status": "finished", 00:10:07.545 "queue_depth": 1, 00:10:07.545 "io_size": 131072, 00:10:07.545 "runtime": 1.389438, 00:10:07.545 "iops": 15008.946063084499, 00:10:07.545 "mibps": 1876.1182578855623, 00:10:07.545 "io_failed": 1, 00:10:07.545 "io_timeout": 0, 00:10:07.545 "avg_latency_us": 92.03339624083529, 00:10:07.545 "min_latency_us": 27.053275109170304, 00:10:07.545 "max_latency_us": 1645.5545851528384 00:10:07.545 } 00:10:07.545 ], 00:10:07.545 "core_count": 1 00:10:07.545 } 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73151 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73151 ']' 00:10:07.545 16:25:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73151 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73151 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73151' 00:10:07.545 killing process with pid 73151 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73151 00:10:07.545 [2024-12-06 16:25:49.347554] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.545 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73151 00:10:07.545 [2024-12-06 16:25:49.363918] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.806 16:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TSwKs4lQtE 00:10:07.806 16:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:07.806 16:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:07.806 16:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:07.806 16:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:07.806 16:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:07.806 16:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:07.806 16:25:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:07.806 00:10:07.806 real 0m3.252s 00:10:07.806 user 0m4.176s 00:10:07.806 sys 0m0.520s 00:10:07.806 ************************************ 00:10:07.806 END TEST raid_read_error_test 00:10:07.806 ************************************ 00:10:07.806 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.806 16:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.065 16:25:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:08.065 16:25:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:08.065 16:25:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.065 16:25:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.065 ************************************ 00:10:08.065 START TEST raid_write_error_test 00:10:08.065 ************************************ 00:10:08.065 16:25:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:10:08.066 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:08.066 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:08.066 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:08.066 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:08.066 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.066 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:08.066 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.066 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.066 16:25:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:08.066 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.066 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.066 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:08.066 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7Wyqn0XEFY 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73280 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:08.093 16:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73280 00:10:08.093 16:25:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73280 ']' 00:10:08.094 16:25:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.094 16:25:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.094 16:25:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.094 16:25:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.094 16:25:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.094 [2024-12-06 16:25:49.766662] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:10:08.094 [2024-12-06 16:25:49.766790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73280 ] 00:10:08.354 [2024-12-06 16:25:49.938618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.354 [2024-12-06 16:25:49.967839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.354 [2024-12-06 16:25:50.011406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.354 [2024-12-06 16:25:50.011534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.923 BaseBdev1_malloc 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.923 true 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.923 [2024-12-06 16:25:50.651808] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:08.923 [2024-12-06 16:25:50.651860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.923 [2024-12-06 16:25:50.651880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:08.923 [2024-12-06 16:25:50.651889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.923 [2024-12-06 16:25:50.654046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.923 [2024-12-06 16:25:50.654135] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:08.923 BaseBdev1 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.923 BaseBdev2_malloc 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.923 true 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.923 [2024-12-06 16:25:50.692465] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:08.923 [2024-12-06 16:25:50.692553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.923 [2024-12-06 16:25:50.692575] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:08.923 
[2024-12-06 16:25:50.692584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.923 [2024-12-06 16:25:50.694577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.923 [2024-12-06 16:25:50.694616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:08.923 BaseBdev2 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.923 [2024-12-06 16:25:50.704498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.923 [2024-12-06 16:25:50.706304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.923 [2024-12-06 16:25:50.706462] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:08.923 [2024-12-06 16:25:50.706480] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:08.923 [2024-12-06 16:25:50.706722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:08.923 [2024-12-06 16:25:50.706857] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:08.923 [2024-12-06 16:25:50.706869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:08.923 [2024-12-06 16:25:50.706991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.923 
16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.923 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.924 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.924 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.924 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.924 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.184 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.184 "name": "raid_bdev1", 00:10:09.184 "uuid": "0b7e5145-6e75-4185-97ef-3bd33f3ada26", 00:10:09.184 "strip_size_kb": 64, 00:10:09.184 "state": "online", 00:10:09.184 "raid_level": "raid0", 00:10:09.184 "superblock": true, 
00:10:09.184 "num_base_bdevs": 2, 00:10:09.184 "num_base_bdevs_discovered": 2, 00:10:09.184 "num_base_bdevs_operational": 2, 00:10:09.184 "base_bdevs_list": [ 00:10:09.184 { 00:10:09.184 "name": "BaseBdev1", 00:10:09.184 "uuid": "495fbced-2a2d-5399-8032-2a73502ac773", 00:10:09.184 "is_configured": true, 00:10:09.184 "data_offset": 2048, 00:10:09.184 "data_size": 63488 00:10:09.184 }, 00:10:09.184 { 00:10:09.184 "name": "BaseBdev2", 00:10:09.184 "uuid": "c2ae51ac-487c-585d-9d9d-a4799454aee0", 00:10:09.184 "is_configured": true, 00:10:09.184 "data_offset": 2048, 00:10:09.184 "data_size": 63488 00:10:09.184 } 00:10:09.184 ] 00:10:09.184 }' 00:10:09.184 16:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.184 16:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.444 16:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:09.444 16:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:09.444 [2024-12-06 16:25:51.275943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.383 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.641 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.641 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.641 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.641 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.641 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.641 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.641 "name": "raid_bdev1", 00:10:10.641 "uuid": "0b7e5145-6e75-4185-97ef-3bd33f3ada26", 00:10:10.641 "strip_size_kb": 64, 00:10:10.641 "state": "online", 00:10:10.641 "raid_level": "raid0", 
00:10:10.641 "superblock": true, 00:10:10.641 "num_base_bdevs": 2, 00:10:10.641 "num_base_bdevs_discovered": 2, 00:10:10.641 "num_base_bdevs_operational": 2, 00:10:10.641 "base_bdevs_list": [ 00:10:10.641 { 00:10:10.641 "name": "BaseBdev1", 00:10:10.641 "uuid": "495fbced-2a2d-5399-8032-2a73502ac773", 00:10:10.641 "is_configured": true, 00:10:10.641 "data_offset": 2048, 00:10:10.641 "data_size": 63488 00:10:10.641 }, 00:10:10.641 { 00:10:10.641 "name": "BaseBdev2", 00:10:10.641 "uuid": "c2ae51ac-487c-585d-9d9d-a4799454aee0", 00:10:10.641 "is_configured": true, 00:10:10.641 "data_offset": 2048, 00:10:10.641 "data_size": 63488 00:10:10.641 } 00:10:10.641 ] 00:10:10.641 }' 00:10:10.641 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.641 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.899 [2024-12-06 16:25:52.676271] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:10.899 [2024-12-06 16:25:52.676308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.899 { 00:10:10.899 "results": [ 00:10:10.899 { 00:10:10.899 "job": "raid_bdev1", 00:10:10.899 "core_mask": "0x1", 00:10:10.899 "workload": "randrw", 00:10:10.899 "percentage": 50, 00:10:10.899 "status": "finished", 00:10:10.899 "queue_depth": 1, 00:10:10.899 "io_size": 131072, 00:10:10.899 "runtime": 1.401225, 00:10:10.899 "iops": 15426.50181091545, 00:10:10.899 "mibps": 1928.3127263644312, 00:10:10.899 "io_failed": 1, 00:10:10.899 "io_timeout": 0, 00:10:10.899 "avg_latency_us": 89.5792109275148, 00:10:10.899 "min_latency_us": 
26.606113537117903, 00:10:10.899 "max_latency_us": 1638.4 00:10:10.899 } 00:10:10.899 ], 00:10:10.899 "core_count": 1 00:10:10.899 } 00:10:10.899 [2024-12-06 16:25:52.679301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.899 [2024-12-06 16:25:52.679375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.899 [2024-12-06 16:25:52.679416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.899 [2024-12-06 16:25:52.679427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73280 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73280 ']' 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73280 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73280 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73280' 00:10:10.899 killing process with pid 73280 00:10:10.899 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73280 00:10:10.899 [2024-12-06 16:25:52.730683] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.900 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73280 00:10:11.231 [2024-12-06 16:25:52.747128] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.231 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:11.231 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7Wyqn0XEFY 00:10:11.231 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:11.231 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:11.231 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:11.231 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.231 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:11.231 16:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:11.231 00:10:11.231 real 0m3.305s 00:10:11.231 user 0m4.267s 00:10:11.231 sys 0m0.509s 00:10:11.231 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.231 16:25:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.231 ************************************ 00:10:11.231 END TEST raid_write_error_test 00:10:11.231 ************************************ 00:10:11.231 16:25:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:11.231 16:25:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:11.231 16:25:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:11.231 16:25:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.231 16:25:53 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.231 ************************************ 00:10:11.231 START TEST raid_state_function_test 00:10:11.231 ************************************ 00:10:11.231 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:10:11.232 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:11.232 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:11.232 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:11.232 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:11.232 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:11.232 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.232 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:11.232 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.232 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.232 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:11.232 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.232 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:11.506 16:25:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:11.506 Process raid pid: 73417 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73417 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73417' 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73417 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73417 ']' 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.506 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.506 [2024-12-06 16:25:53.135832] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:10:11.506 [2024-12-06 16:25:53.135950] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.506 [2024-12-06 16:25:53.308127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.506 [2024-12-06 16:25:53.338120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.765 [2024-12-06 16:25:53.381436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.765 [2024-12-06 16:25:53.381500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.329 [2024-12-06 16:25:53.976183] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.329 
[2024-12-06 16:25:53.976337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.329 [2024-12-06 16:25:53.976352] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.329 [2024-12-06 16:25:53.976364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:12.329 16:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.329 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.329 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.329 "name": "Existed_Raid", 00:10:12.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.329 "strip_size_kb": 64, 00:10:12.329 "state": "configuring", 00:10:12.329 "raid_level": "concat", 00:10:12.329 "superblock": false, 00:10:12.329 "num_base_bdevs": 2, 00:10:12.329 "num_base_bdevs_discovered": 0, 00:10:12.329 "num_base_bdevs_operational": 2, 00:10:12.329 "base_bdevs_list": [ 00:10:12.329 { 00:10:12.329 "name": "BaseBdev1", 00:10:12.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.329 "is_configured": false, 00:10:12.329 "data_offset": 0, 00:10:12.329 "data_size": 0 00:10:12.329 }, 00:10:12.329 { 00:10:12.329 "name": "BaseBdev2", 00:10:12.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.329 "is_configured": false, 00:10:12.329 "data_offset": 0, 00:10:12.329 "data_size": 0 00:10:12.329 } 00:10:12.329 ] 00:10:12.329 }' 00:10:12.329 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.329 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.587 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:12.587 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.587 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.588 [2024-12-06 16:25:54.387425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.588 [2024-12-06 16:25:54.387483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, 
state configuring 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.588 [2024-12-06 16:25:54.399411] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.588 [2024-12-06 16:25:54.399503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.588 [2024-12-06 16:25:54.399535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.588 [2024-12-06 16:25:54.399564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.588 [2024-12-06 16:25:54.420451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.588 BaseBdev1 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:12.588 16:25:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.588 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.846 [ 00:10:12.846 { 00:10:12.846 "name": "BaseBdev1", 00:10:12.846 "aliases": [ 00:10:12.846 "ef99f504-2cad-405b-a82a-0b0bc24c85a6" 00:10:12.846 ], 00:10:12.846 "product_name": "Malloc disk", 00:10:12.846 "block_size": 512, 00:10:12.846 "num_blocks": 65536, 00:10:12.846 "uuid": "ef99f504-2cad-405b-a82a-0b0bc24c85a6", 00:10:12.846 "assigned_rate_limits": { 00:10:12.846 "rw_ios_per_sec": 0, 00:10:12.846 "rw_mbytes_per_sec": 0, 00:10:12.846 "r_mbytes_per_sec": 0, 00:10:12.846 "w_mbytes_per_sec": 0 00:10:12.846 }, 00:10:12.846 "claimed": true, 00:10:12.846 "claim_type": "exclusive_write", 00:10:12.846 "zoned": false, 00:10:12.846 "supported_io_types": { 00:10:12.846 "read": true, 00:10:12.846 "write": true, 00:10:12.846 "unmap": true, 00:10:12.846 "flush": true, 
00:10:12.846 "reset": true, 00:10:12.846 "nvme_admin": false, 00:10:12.846 "nvme_io": false, 00:10:12.846 "nvme_io_md": false, 00:10:12.846 "write_zeroes": true, 00:10:12.846 "zcopy": true, 00:10:12.846 "get_zone_info": false, 00:10:12.846 "zone_management": false, 00:10:12.846 "zone_append": false, 00:10:12.846 "compare": false, 00:10:12.846 "compare_and_write": false, 00:10:12.846 "abort": true, 00:10:12.846 "seek_hole": false, 00:10:12.846 "seek_data": false, 00:10:12.846 "copy": true, 00:10:12.846 "nvme_iov_md": false 00:10:12.846 }, 00:10:12.846 "memory_domains": [ 00:10:12.846 { 00:10:12.846 "dma_device_id": "system", 00:10:12.846 "dma_device_type": 1 00:10:12.846 }, 00:10:12.846 { 00:10:12.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.846 "dma_device_type": 2 00:10:12.846 } 00:10:12.846 ], 00:10:12.846 "driver_specific": {} 00:10:12.846 } 00:10:12.846 ] 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.846 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.846 "name": "Existed_Raid", 00:10:12.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.846 "strip_size_kb": 64, 00:10:12.846 "state": "configuring", 00:10:12.846 "raid_level": "concat", 00:10:12.846 "superblock": false, 00:10:12.846 "num_base_bdevs": 2, 00:10:12.846 "num_base_bdevs_discovered": 1, 00:10:12.846 "num_base_bdevs_operational": 2, 00:10:12.846 "base_bdevs_list": [ 00:10:12.846 { 00:10:12.847 "name": "BaseBdev1", 00:10:12.847 "uuid": "ef99f504-2cad-405b-a82a-0b0bc24c85a6", 00:10:12.847 "is_configured": true, 00:10:12.847 "data_offset": 0, 00:10:12.847 "data_size": 65536 00:10:12.847 }, 00:10:12.847 { 00:10:12.847 "name": "BaseBdev2", 00:10:12.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.847 "is_configured": false, 00:10:12.847 "data_offset": 0, 00:10:12.847 "data_size": 0 00:10:12.847 } 00:10:12.847 ] 00:10:12.847 }' 00:10:12.847 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.847 16:25:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.105 [2024-12-06 16:25:54.911814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.105 [2024-12-06 16:25:54.911875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.105 [2024-12-06 16:25:54.923831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.105 [2024-12-06 16:25:54.925962] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.105 [2024-12-06 16:25:54.926064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
2 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.105 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.365 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.365 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.365 "name": "Existed_Raid", 00:10:13.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.365 "strip_size_kb": 64, 00:10:13.365 "state": "configuring", 00:10:13.365 "raid_level": "concat", 00:10:13.365 "superblock": false, 00:10:13.365 "num_base_bdevs": 2, 00:10:13.365 
"num_base_bdevs_discovered": 1, 00:10:13.365 "num_base_bdevs_operational": 2, 00:10:13.365 "base_bdevs_list": [ 00:10:13.365 { 00:10:13.365 "name": "BaseBdev1", 00:10:13.365 "uuid": "ef99f504-2cad-405b-a82a-0b0bc24c85a6", 00:10:13.365 "is_configured": true, 00:10:13.365 "data_offset": 0, 00:10:13.365 "data_size": 65536 00:10:13.365 }, 00:10:13.365 { 00:10:13.365 "name": "BaseBdev2", 00:10:13.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.365 "is_configured": false, 00:10:13.365 "data_offset": 0, 00:10:13.365 "data_size": 0 00:10:13.365 } 00:10:13.365 ] 00:10:13.365 }' 00:10:13.365 16:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.365 16:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.624 [2024-12-06 16:25:55.418118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.624 [2024-12-06 16:25:55.418307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:13.624 [2024-12-06 16:25:55.418341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:13.624 [2024-12-06 16:25:55.418672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:13.624 [2024-12-06 16:25:55.418878] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:13.624 [2024-12-06 16:25:55.418931] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:13.624 [2024-12-06 16:25:55.419216] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.624 BaseBdev2 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.624 [ 00:10:13.624 { 00:10:13.624 "name": "BaseBdev2", 00:10:13.624 "aliases": [ 00:10:13.624 "b2c9fdbc-6ae2-4377-b83f-24c4663dd301" 00:10:13.624 ], 00:10:13.624 "product_name": "Malloc disk", 00:10:13.624 "block_size": 512, 00:10:13.624 "num_blocks": 65536, 00:10:13.624 "uuid": "b2c9fdbc-6ae2-4377-b83f-24c4663dd301", 00:10:13.624 
"assigned_rate_limits": { 00:10:13.624 "rw_ios_per_sec": 0, 00:10:13.624 "rw_mbytes_per_sec": 0, 00:10:13.624 "r_mbytes_per_sec": 0, 00:10:13.624 "w_mbytes_per_sec": 0 00:10:13.624 }, 00:10:13.624 "claimed": true, 00:10:13.624 "claim_type": "exclusive_write", 00:10:13.624 "zoned": false, 00:10:13.624 "supported_io_types": { 00:10:13.624 "read": true, 00:10:13.624 "write": true, 00:10:13.624 "unmap": true, 00:10:13.624 "flush": true, 00:10:13.624 "reset": true, 00:10:13.624 "nvme_admin": false, 00:10:13.624 "nvme_io": false, 00:10:13.624 "nvme_io_md": false, 00:10:13.624 "write_zeroes": true, 00:10:13.624 "zcopy": true, 00:10:13.624 "get_zone_info": false, 00:10:13.624 "zone_management": false, 00:10:13.624 "zone_append": false, 00:10:13.624 "compare": false, 00:10:13.624 "compare_and_write": false, 00:10:13.624 "abort": true, 00:10:13.624 "seek_hole": false, 00:10:13.624 "seek_data": false, 00:10:13.624 "copy": true, 00:10:13.624 "nvme_iov_md": false 00:10:13.624 }, 00:10:13.624 "memory_domains": [ 00:10:13.624 { 00:10:13.624 "dma_device_id": "system", 00:10:13.624 "dma_device_type": 1 00:10:13.624 }, 00:10:13.624 { 00:10:13.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.624 "dma_device_type": 2 00:10:13.624 } 00:10:13.624 ], 00:10:13.624 "driver_specific": {} 00:10:13.624 } 00:10:13.624 ] 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.624 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.882 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.882 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.882 "name": "Existed_Raid", 00:10:13.882 "uuid": "f7527e17-cfea-4752-9581-56dbe069a0aa", 00:10:13.882 "strip_size_kb": 64, 00:10:13.882 "state": "online", 00:10:13.882 "raid_level": "concat", 00:10:13.882 "superblock": false, 00:10:13.882 "num_base_bdevs": 2, 00:10:13.882 "num_base_bdevs_discovered": 2, 00:10:13.882 "num_base_bdevs_operational": 2, 00:10:13.882 "base_bdevs_list": [ 00:10:13.882 { 
00:10:13.882 "name": "BaseBdev1", 00:10:13.882 "uuid": "ef99f504-2cad-405b-a82a-0b0bc24c85a6", 00:10:13.882 "is_configured": true, 00:10:13.882 "data_offset": 0, 00:10:13.882 "data_size": 65536 00:10:13.882 }, 00:10:13.882 { 00:10:13.882 "name": "BaseBdev2", 00:10:13.882 "uuid": "b2c9fdbc-6ae2-4377-b83f-24c4663dd301", 00:10:13.882 "is_configured": true, 00:10:13.882 "data_offset": 0, 00:10:13.882 "data_size": 65536 00:10:13.882 } 00:10:13.882 ] 00:10:13.882 }' 00:10:13.882 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.882 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.141 [2024-12-06 16:25:55.865767] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.141 "name": "Existed_Raid", 00:10:14.141 "aliases": [ 00:10:14.141 "f7527e17-cfea-4752-9581-56dbe069a0aa" 00:10:14.141 ], 00:10:14.141 "product_name": "Raid Volume", 00:10:14.141 "block_size": 512, 00:10:14.141 "num_blocks": 131072, 00:10:14.141 "uuid": "f7527e17-cfea-4752-9581-56dbe069a0aa", 00:10:14.141 "assigned_rate_limits": { 00:10:14.141 "rw_ios_per_sec": 0, 00:10:14.141 "rw_mbytes_per_sec": 0, 00:10:14.141 "r_mbytes_per_sec": 0, 00:10:14.141 "w_mbytes_per_sec": 0 00:10:14.141 }, 00:10:14.141 "claimed": false, 00:10:14.141 "zoned": false, 00:10:14.141 "supported_io_types": { 00:10:14.141 "read": true, 00:10:14.141 "write": true, 00:10:14.141 "unmap": true, 00:10:14.141 "flush": true, 00:10:14.141 "reset": true, 00:10:14.141 "nvme_admin": false, 00:10:14.141 "nvme_io": false, 00:10:14.141 "nvme_io_md": false, 00:10:14.141 "write_zeroes": true, 00:10:14.141 "zcopy": false, 00:10:14.141 "get_zone_info": false, 00:10:14.141 "zone_management": false, 00:10:14.141 "zone_append": false, 00:10:14.141 "compare": false, 00:10:14.141 "compare_and_write": false, 00:10:14.141 "abort": false, 00:10:14.141 "seek_hole": false, 00:10:14.141 "seek_data": false, 00:10:14.141 "copy": false, 00:10:14.141 "nvme_iov_md": false 00:10:14.141 }, 00:10:14.141 "memory_domains": [ 00:10:14.141 { 00:10:14.141 "dma_device_id": "system", 00:10:14.141 "dma_device_type": 1 00:10:14.141 }, 00:10:14.141 { 00:10:14.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.141 "dma_device_type": 2 00:10:14.141 }, 00:10:14.141 { 00:10:14.141 "dma_device_id": "system", 00:10:14.141 "dma_device_type": 1 00:10:14.141 }, 00:10:14.141 { 00:10:14.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.141 "dma_device_type": 2 00:10:14.141 } 00:10:14.141 ], 00:10:14.141 "driver_specific": { 00:10:14.141 "raid": { 00:10:14.141 "uuid": "f7527e17-cfea-4752-9581-56dbe069a0aa", 
00:10:14.141 "strip_size_kb": 64, 00:10:14.141 "state": "online", 00:10:14.141 "raid_level": "concat", 00:10:14.141 "superblock": false, 00:10:14.141 "num_base_bdevs": 2, 00:10:14.141 "num_base_bdevs_discovered": 2, 00:10:14.141 "num_base_bdevs_operational": 2, 00:10:14.141 "base_bdevs_list": [ 00:10:14.141 { 00:10:14.141 "name": "BaseBdev1", 00:10:14.141 "uuid": "ef99f504-2cad-405b-a82a-0b0bc24c85a6", 00:10:14.141 "is_configured": true, 00:10:14.141 "data_offset": 0, 00:10:14.141 "data_size": 65536 00:10:14.141 }, 00:10:14.141 { 00:10:14.141 "name": "BaseBdev2", 00:10:14.141 "uuid": "b2c9fdbc-6ae2-4377-b83f-24c4663dd301", 00:10:14.141 "is_configured": true, 00:10:14.141 "data_offset": 0, 00:10:14.141 "data_size": 65536 00:10:14.141 } 00:10:14.141 ] 00:10:14.141 } 00:10:14.141 } 00:10:14.141 }' 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:14.141 BaseBdev2' 00:10:14.141 16:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.400 [2024-12-06 16:25:56.101104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.400 [2024-12-06 16:25:56.101188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.400 [2024-12-06 16:25:56.101271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.400 16:25:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.400 "name": "Existed_Raid", 00:10:14.400 "uuid": "f7527e17-cfea-4752-9581-56dbe069a0aa", 00:10:14.400 "strip_size_kb": 64, 00:10:14.400 "state": "offline", 00:10:14.400 "raid_level": "concat", 00:10:14.400 "superblock": false, 00:10:14.400 "num_base_bdevs": 2, 00:10:14.400 "num_base_bdevs_discovered": 1, 00:10:14.400 "num_base_bdevs_operational": 1, 00:10:14.400 "base_bdevs_list": [ 00:10:14.400 { 00:10:14.400 "name": null, 00:10:14.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.400 "is_configured": false, 00:10:14.400 "data_offset": 0, 00:10:14.400 "data_size": 65536 00:10:14.400 }, 00:10:14.400 { 00:10:14.400 "name": "BaseBdev2", 00:10:14.400 "uuid": "b2c9fdbc-6ae2-4377-b83f-24c4663dd301", 00:10:14.400 "is_configured": true, 00:10:14.400 "data_offset": 0, 00:10:14.400 "data_size": 65536 00:10:14.400 } 00:10:14.400 ] 00:10:14.400 }' 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.400 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.967 16:25:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.967 [2024-12-06 16:25:56.595894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:14.967 [2024-12-06 16:25:56.596029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73417 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73417 ']' 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73417 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73417 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73417' 00:10:14.967 killing process with pid 73417 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73417 00:10:14.967 [2024-12-06 16:25:56.701455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.967 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73417 00:10:14.967 [2024-12-06 16:25:56.702517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.227 16:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:15.227 00:10:15.227 real 0m3.885s 00:10:15.227 user 0m6.117s 00:10:15.227 sys 
0m0.797s 00:10:15.227 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.227 ************************************ 00:10:15.227 END TEST raid_state_function_test 00:10:15.227 ************************************ 00:10:15.227 16:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.227 16:25:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:15.227 16:25:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:15.227 16:25:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.227 16:25:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.227 ************************************ 00:10:15.227 START TEST raid_state_function_test_sb 00:10:15.227 ************************************ 00:10:15.227 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:10:15.227 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:15.227 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:15.227 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:15.227 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:15.227 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:15.228 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.228 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:15.228 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.228 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:10:15.228 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:15.228 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.228 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73654 00:10:15.228 Process raid pid: 73654 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73654' 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73654 00:10:15.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73654 ']' 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.228 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.487 [2024-12-06 16:25:57.090632] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:10:15.487 [2024-12-06 16:25:57.090846] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.487 [2024-12-06 16:25:57.263930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.487 [2024-12-06 16:25:57.291725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.746 [2024-12-06 16:25:57.335574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.746 [2024-12-06 16:25:57.335641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.320 [2024-12-06 16:25:57.955245] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.320 [2024-12-06 16:25:57.955309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.320 [2024-12-06 16:25:57.955319] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.320 [2024-12-06 16:25:57.955331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.320 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.320 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.320 "name": "Existed_Raid", 00:10:16.320 "uuid": "6c4a4881-9983-422e-9c16-f90ea094db2a", 00:10:16.320 
"strip_size_kb": 64, 00:10:16.320 "state": "configuring", 00:10:16.320 "raid_level": "concat", 00:10:16.320 "superblock": true, 00:10:16.320 "num_base_bdevs": 2, 00:10:16.320 "num_base_bdevs_discovered": 0, 00:10:16.320 "num_base_bdevs_operational": 2, 00:10:16.320 "base_bdevs_list": [ 00:10:16.320 { 00:10:16.320 "name": "BaseBdev1", 00:10:16.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.320 "is_configured": false, 00:10:16.320 "data_offset": 0, 00:10:16.320 "data_size": 0 00:10:16.320 }, 00:10:16.320 { 00:10:16.320 "name": "BaseBdev2", 00:10:16.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.320 "is_configured": false, 00:10:16.320 "data_offset": 0, 00:10:16.321 "data_size": 0 00:10:16.321 } 00:10:16.321 ] 00:10:16.321 }' 00:10:16.321 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.321 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.609 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.609 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.609 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.609 [2024-12-06 16:25:58.346471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.609 [2024-12-06 16:25:58.346594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:16.609 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.609 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:16.609 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:16.609 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.609 [2024-12-06 16:25:58.358454] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.609 [2024-12-06 16:25:58.358548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.609 [2024-12-06 16:25:58.358598] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.609 [2024-12-06 16:25:58.358628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.609 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.609 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.609 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.609 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.609 [2024-12-06 16:25:58.380054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.609 BaseBdev1 00:10:16.609 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.610 [ 00:10:16.610 { 00:10:16.610 "name": "BaseBdev1", 00:10:16.610 "aliases": [ 00:10:16.610 "9a10bab0-94cb-48b2-8b7e-f816eb112f8c" 00:10:16.610 ], 00:10:16.610 "product_name": "Malloc disk", 00:10:16.610 "block_size": 512, 00:10:16.610 "num_blocks": 65536, 00:10:16.610 "uuid": "9a10bab0-94cb-48b2-8b7e-f816eb112f8c", 00:10:16.610 "assigned_rate_limits": { 00:10:16.610 "rw_ios_per_sec": 0, 00:10:16.610 "rw_mbytes_per_sec": 0, 00:10:16.610 "r_mbytes_per_sec": 0, 00:10:16.610 "w_mbytes_per_sec": 0 00:10:16.610 }, 00:10:16.610 "claimed": true, 00:10:16.610 "claim_type": "exclusive_write", 00:10:16.610 "zoned": false, 00:10:16.610 "supported_io_types": { 00:10:16.610 "read": true, 00:10:16.610 "write": true, 00:10:16.610 "unmap": true, 00:10:16.610 "flush": true, 00:10:16.610 "reset": true, 00:10:16.610 "nvme_admin": false, 00:10:16.610 "nvme_io": false, 00:10:16.610 "nvme_io_md": false, 00:10:16.610 "write_zeroes": true, 00:10:16.610 "zcopy": true, 00:10:16.610 "get_zone_info": false, 00:10:16.610 "zone_management": false, 00:10:16.610 "zone_append": false, 00:10:16.610 "compare": false, 00:10:16.610 
"compare_and_write": false, 00:10:16.610 "abort": true, 00:10:16.610 "seek_hole": false, 00:10:16.610 "seek_data": false, 00:10:16.610 "copy": true, 00:10:16.610 "nvme_iov_md": false 00:10:16.610 }, 00:10:16.610 "memory_domains": [ 00:10:16.610 { 00:10:16.610 "dma_device_id": "system", 00:10:16.610 "dma_device_type": 1 00:10:16.610 }, 00:10:16.610 { 00:10:16.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.610 "dma_device_type": 2 00:10:16.610 } 00:10:16.610 ], 00:10:16.610 "driver_specific": {} 00:10:16.610 } 00:10:16.610 ] 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.610 16:25:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.610 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.869 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.869 "name": "Existed_Raid", 00:10:16.869 "uuid": "7122dfbf-f9fa-4246-9edc-3c4edb962900", 00:10:16.869 "strip_size_kb": 64, 00:10:16.869 "state": "configuring", 00:10:16.869 "raid_level": "concat", 00:10:16.869 "superblock": true, 00:10:16.869 "num_base_bdevs": 2, 00:10:16.869 "num_base_bdevs_discovered": 1, 00:10:16.869 "num_base_bdevs_operational": 2, 00:10:16.869 "base_bdevs_list": [ 00:10:16.869 { 00:10:16.869 "name": "BaseBdev1", 00:10:16.869 "uuid": "9a10bab0-94cb-48b2-8b7e-f816eb112f8c", 00:10:16.869 "is_configured": true, 00:10:16.869 "data_offset": 2048, 00:10:16.869 "data_size": 63488 00:10:16.869 }, 00:10:16.869 { 00:10:16.869 "name": "BaseBdev2", 00:10:16.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.869 "is_configured": false, 00:10:16.869 "data_offset": 0, 00:10:16.869 "data_size": 0 00:10:16.869 } 00:10:16.869 ] 00:10:16.869 }' 00:10:16.869 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.869 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.127 [2024-12-06 16:25:58.859328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.127 [2024-12-06 16:25:58.859401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.127 [2024-12-06 16:25:58.867345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.127 [2024-12-06 16:25:58.869573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.127 [2024-12-06 16:25:58.869650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.127 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.128 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.128 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.128 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.128 "name": "Existed_Raid", 00:10:17.128 "uuid": "813f2537-65af-42fc-b19c-c22bb416d741", 00:10:17.128 "strip_size_kb": 64, 00:10:17.128 "state": "configuring", 00:10:17.128 "raid_level": "concat", 00:10:17.128 "superblock": true, 00:10:17.128 "num_base_bdevs": 2, 00:10:17.128 "num_base_bdevs_discovered": 1, 00:10:17.128 "num_base_bdevs_operational": 2, 00:10:17.128 "base_bdevs_list": [ 00:10:17.128 { 00:10:17.128 "name": "BaseBdev1", 00:10:17.128 "uuid": 
"9a10bab0-94cb-48b2-8b7e-f816eb112f8c", 00:10:17.128 "is_configured": true, 00:10:17.128 "data_offset": 2048, 00:10:17.128 "data_size": 63488 00:10:17.128 }, 00:10:17.128 { 00:10:17.128 "name": "BaseBdev2", 00:10:17.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.128 "is_configured": false, 00:10:17.128 "data_offset": 0, 00:10:17.128 "data_size": 0 00:10:17.128 } 00:10:17.128 ] 00:10:17.128 }' 00:10:17.128 16:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.128 16:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.696 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:17.696 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.696 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.696 [2024-12-06 16:25:59.330053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.697 [2024-12-06 16:25:59.330296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:17.697 [2024-12-06 16:25:59.330322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:17.697 BaseBdev2 00:10:17.697 [2024-12-06 16:25:59.330607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:17.697 [2024-12-06 16:25:59.330763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:17.697 [2024-12-06 16:25:59.330780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:17.697 [2024-12-06 16:25:59.330916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.697 [ 00:10:17.697 { 00:10:17.697 "name": "BaseBdev2", 00:10:17.697 "aliases": [ 00:10:17.697 "9ab4eceb-f310-4c2a-abc9-fc2e6d8f724d" 00:10:17.697 ], 00:10:17.697 "product_name": "Malloc disk", 00:10:17.697 "block_size": 512, 00:10:17.697 "num_blocks": 65536, 00:10:17.697 "uuid": "9ab4eceb-f310-4c2a-abc9-fc2e6d8f724d", 00:10:17.697 "assigned_rate_limits": { 00:10:17.697 "rw_ios_per_sec": 0, 00:10:17.697 "rw_mbytes_per_sec": 0, 00:10:17.697 "r_mbytes_per_sec": 0, 
00:10:17.697 "w_mbytes_per_sec": 0 00:10:17.697 }, 00:10:17.697 "claimed": true, 00:10:17.697 "claim_type": "exclusive_write", 00:10:17.697 "zoned": false, 00:10:17.697 "supported_io_types": { 00:10:17.697 "read": true, 00:10:17.697 "write": true, 00:10:17.697 "unmap": true, 00:10:17.697 "flush": true, 00:10:17.697 "reset": true, 00:10:17.697 "nvme_admin": false, 00:10:17.697 "nvme_io": false, 00:10:17.697 "nvme_io_md": false, 00:10:17.697 "write_zeroes": true, 00:10:17.697 "zcopy": true, 00:10:17.697 "get_zone_info": false, 00:10:17.697 "zone_management": false, 00:10:17.697 "zone_append": false, 00:10:17.697 "compare": false, 00:10:17.697 "compare_and_write": false, 00:10:17.697 "abort": true, 00:10:17.697 "seek_hole": false, 00:10:17.697 "seek_data": false, 00:10:17.697 "copy": true, 00:10:17.697 "nvme_iov_md": false 00:10:17.697 }, 00:10:17.697 "memory_domains": [ 00:10:17.697 { 00:10:17.697 "dma_device_id": "system", 00:10:17.697 "dma_device_type": 1 00:10:17.697 }, 00:10:17.697 { 00:10:17.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.697 "dma_device_type": 2 00:10:17.697 } 00:10:17.697 ], 00:10:17.697 "driver_specific": {} 00:10:17.697 } 00:10:17.697 ] 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.697 "name": "Existed_Raid", 00:10:17.697 "uuid": "813f2537-65af-42fc-b19c-c22bb416d741", 00:10:17.697 "strip_size_kb": 64, 00:10:17.697 "state": "online", 00:10:17.697 "raid_level": "concat", 00:10:17.697 "superblock": true, 00:10:17.697 "num_base_bdevs": 2, 00:10:17.697 "num_base_bdevs_discovered": 2, 00:10:17.697 "num_base_bdevs_operational": 2, 00:10:17.697 "base_bdevs_list": [ 00:10:17.697 { 00:10:17.697 "name": "BaseBdev1", 00:10:17.697 "uuid": 
"9a10bab0-94cb-48b2-8b7e-f816eb112f8c", 00:10:17.697 "is_configured": true, 00:10:17.697 "data_offset": 2048, 00:10:17.697 "data_size": 63488 00:10:17.697 }, 00:10:17.697 { 00:10:17.697 "name": "BaseBdev2", 00:10:17.697 "uuid": "9ab4eceb-f310-4c2a-abc9-fc2e6d8f724d", 00:10:17.697 "is_configured": true, 00:10:17.697 "data_offset": 2048, 00:10:17.697 "data_size": 63488 00:10:17.697 } 00:10:17.697 ] 00:10:17.697 }' 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.697 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.265 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:18.265 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:18.265 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:18.265 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.265 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.265 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.265 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.265 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:18.265 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.265 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.265 [2024-12-06 16:25:59.837657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.265 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:18.265 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.265 "name": "Existed_Raid", 00:10:18.265 "aliases": [ 00:10:18.265 "813f2537-65af-42fc-b19c-c22bb416d741" 00:10:18.265 ], 00:10:18.265 "product_name": "Raid Volume", 00:10:18.265 "block_size": 512, 00:10:18.265 "num_blocks": 126976, 00:10:18.265 "uuid": "813f2537-65af-42fc-b19c-c22bb416d741", 00:10:18.265 "assigned_rate_limits": { 00:10:18.265 "rw_ios_per_sec": 0, 00:10:18.265 "rw_mbytes_per_sec": 0, 00:10:18.265 "r_mbytes_per_sec": 0, 00:10:18.265 "w_mbytes_per_sec": 0 00:10:18.265 }, 00:10:18.265 "claimed": false, 00:10:18.265 "zoned": false, 00:10:18.265 "supported_io_types": { 00:10:18.266 "read": true, 00:10:18.266 "write": true, 00:10:18.266 "unmap": true, 00:10:18.266 "flush": true, 00:10:18.266 "reset": true, 00:10:18.266 "nvme_admin": false, 00:10:18.266 "nvme_io": false, 00:10:18.266 "nvme_io_md": false, 00:10:18.266 "write_zeroes": true, 00:10:18.266 "zcopy": false, 00:10:18.266 "get_zone_info": false, 00:10:18.266 "zone_management": false, 00:10:18.266 "zone_append": false, 00:10:18.266 "compare": false, 00:10:18.266 "compare_and_write": false, 00:10:18.266 "abort": false, 00:10:18.266 "seek_hole": false, 00:10:18.266 "seek_data": false, 00:10:18.266 "copy": false, 00:10:18.266 "nvme_iov_md": false 00:10:18.266 }, 00:10:18.266 "memory_domains": [ 00:10:18.266 { 00:10:18.266 "dma_device_id": "system", 00:10:18.266 "dma_device_type": 1 00:10:18.266 }, 00:10:18.266 { 00:10:18.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.266 "dma_device_type": 2 00:10:18.266 }, 00:10:18.266 { 00:10:18.266 "dma_device_id": "system", 00:10:18.266 "dma_device_type": 1 00:10:18.266 }, 00:10:18.266 { 00:10:18.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.266 "dma_device_type": 2 00:10:18.266 } 00:10:18.266 ], 00:10:18.266 "driver_specific": { 00:10:18.266 "raid": { 00:10:18.266 "uuid": "813f2537-65af-42fc-b19c-c22bb416d741", 00:10:18.266 
"strip_size_kb": 64, 00:10:18.266 "state": "online", 00:10:18.266 "raid_level": "concat", 00:10:18.266 "superblock": true, 00:10:18.266 "num_base_bdevs": 2, 00:10:18.266 "num_base_bdevs_discovered": 2, 00:10:18.266 "num_base_bdevs_operational": 2, 00:10:18.266 "base_bdevs_list": [ 00:10:18.266 { 00:10:18.266 "name": "BaseBdev1", 00:10:18.266 "uuid": "9a10bab0-94cb-48b2-8b7e-f816eb112f8c", 00:10:18.266 "is_configured": true, 00:10:18.266 "data_offset": 2048, 00:10:18.266 "data_size": 63488 00:10:18.266 }, 00:10:18.266 { 00:10:18.266 "name": "BaseBdev2", 00:10:18.266 "uuid": "9ab4eceb-f310-4c2a-abc9-fc2e6d8f724d", 00:10:18.266 "is_configured": true, 00:10:18.266 "data_offset": 2048, 00:10:18.266 "data_size": 63488 00:10:18.266 } 00:10:18.266 ] 00:10:18.266 } 00:10:18.266 } 00:10:18.266 }' 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:18.266 BaseBdev2' 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.266 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.266 [2024-12-06 16:26:00.048970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:18.266 [2024-12-06 16:26:00.049006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.266 [2024-12-06 16:26:00.049064] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.266 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.525 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.525 "name": "Existed_Raid", 00:10:18.525 "uuid": "813f2537-65af-42fc-b19c-c22bb416d741", 00:10:18.525 "strip_size_kb": 64, 00:10:18.525 "state": "offline", 00:10:18.525 "raid_level": "concat", 00:10:18.525 "superblock": true, 00:10:18.525 "num_base_bdevs": 2, 00:10:18.525 "num_base_bdevs_discovered": 1, 00:10:18.525 "num_base_bdevs_operational": 1, 00:10:18.525 "base_bdevs_list": [ 00:10:18.525 { 00:10:18.525 "name": null, 00:10:18.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.525 "is_configured": false, 00:10:18.525 "data_offset": 0, 00:10:18.525 "data_size": 63488 00:10:18.525 }, 00:10:18.525 { 00:10:18.525 "name": "BaseBdev2", 00:10:18.525 "uuid": "9ab4eceb-f310-4c2a-abc9-fc2e6d8f724d", 00:10:18.525 "is_configured": true, 00:10:18.525 "data_offset": 2048, 00:10:18.525 "data_size": 63488 00:10:18.525 } 00:10:18.525 ] 00:10:18.525 }' 00:10:18.525 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.525 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.784 [2024-12-06 16:26:00.555799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.784 [2024-12-06 16:26:00.555860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.784 16:26:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:18.784 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73654 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73654 ']' 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73654 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73654 00:10:19.043 killing process with pid 73654 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73654' 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73654 00:10:19.043 [2024-12-06 16:26:00.666258] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.043 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73654 00:10:19.043 [2024-12-06 
16:26:00.667276] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.304 16:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:19.304 00:10:19.304 real 0m3.899s 00:10:19.304 user 0m6.155s 00:10:19.304 sys 0m0.796s 00:10:19.304 ************************************ 00:10:19.304 END TEST raid_state_function_test_sb 00:10:19.304 ************************************ 00:10:19.304 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.304 16:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.304 16:26:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:19.304 16:26:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:19.304 16:26:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.304 16:26:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.304 ************************************ 00:10:19.304 START TEST raid_superblock_test 00:10:19.304 ************************************ 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # 
base_bdevs_pt_uuid=() 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73890 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73890 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73890 ']' 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.304 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.304 [2024-12-06 16:26:01.041651] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:10:19.304 [2024-12-06 16:26:01.041891] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73890 ] 00:10:19.563 [2024-12-06 16:26:01.213236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.563 [2024-12-06 16:26:01.241364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.563 [2024-12-06 16:26:01.283643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.563 [2024-12-06 16:26:01.283770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.129 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.129 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.129 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:20.129 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.129 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:20.129 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:20.129 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:20.129 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.129 16:26:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.129 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.129 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:20.129 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.129 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.129 malloc1 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.130 [2024-12-06 16:26:01.900791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.130 [2024-12-06 16:26:01.900854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.130 [2024-12-06 16:26:01.900875] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:20.130 [2024-12-06 16:26:01.900889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.130 [2024-12-06 16:26:01.903111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.130 [2024-12-06 16:26:01.903158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:20.130 pt1 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.130 16:26:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.130 malloc2 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.130 [2024-12-06 16:26:01.921391] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.130 [2024-12-06 16:26:01.921447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.130 [2024-12-06 16:26:01.921464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:20.130 
[2024-12-06 16:26:01.921475] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.130 [2024-12-06 16:26:01.923688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.130 [2024-12-06 16:26:01.923725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.130 pt2 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.130 [2024-12-06 16:26:01.929421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.130 [2024-12-06 16:26:01.931301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.130 [2024-12-06 16:26:01.931436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:20.130 [2024-12-06 16:26:01.931451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:20.130 [2024-12-06 16:26:01.931695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:20.130 [2024-12-06 16:26:01.931845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:20.130 [2024-12-06 16:26:01.931854] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:20.130 [2024-12-06 16:26:01.931955] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.130 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.389 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.389 "name": "raid_bdev1", 00:10:20.389 "uuid": 
"11f7c22e-d57f-44c1-9d80-89806b160516", 00:10:20.389 "strip_size_kb": 64, 00:10:20.389 "state": "online", 00:10:20.389 "raid_level": "concat", 00:10:20.389 "superblock": true, 00:10:20.389 "num_base_bdevs": 2, 00:10:20.389 "num_base_bdevs_discovered": 2, 00:10:20.389 "num_base_bdevs_operational": 2, 00:10:20.389 "base_bdevs_list": [ 00:10:20.389 { 00:10:20.389 "name": "pt1", 00:10:20.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.389 "is_configured": true, 00:10:20.389 "data_offset": 2048, 00:10:20.389 "data_size": 63488 00:10:20.389 }, 00:10:20.389 { 00:10:20.389 "name": "pt2", 00:10:20.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.389 "is_configured": true, 00:10:20.389 "data_offset": 2048, 00:10:20.389 "data_size": 63488 00:10:20.389 } 00:10:20.389 ] 00:10:20.389 }' 00:10:20.389 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.389 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.648 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:20.648 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:20.648 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.648 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.648 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.648 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.648 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.648 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.648 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.648 
16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.648 [2024-12-06 16:26:02.385011] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.648 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.648 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.648 "name": "raid_bdev1", 00:10:20.648 "aliases": [ 00:10:20.648 "11f7c22e-d57f-44c1-9d80-89806b160516" 00:10:20.648 ], 00:10:20.648 "product_name": "Raid Volume", 00:10:20.648 "block_size": 512, 00:10:20.648 "num_blocks": 126976, 00:10:20.648 "uuid": "11f7c22e-d57f-44c1-9d80-89806b160516", 00:10:20.648 "assigned_rate_limits": { 00:10:20.648 "rw_ios_per_sec": 0, 00:10:20.648 "rw_mbytes_per_sec": 0, 00:10:20.648 "r_mbytes_per_sec": 0, 00:10:20.648 "w_mbytes_per_sec": 0 00:10:20.648 }, 00:10:20.648 "claimed": false, 00:10:20.648 "zoned": false, 00:10:20.648 "supported_io_types": { 00:10:20.648 "read": true, 00:10:20.648 "write": true, 00:10:20.648 "unmap": true, 00:10:20.648 "flush": true, 00:10:20.648 "reset": true, 00:10:20.648 "nvme_admin": false, 00:10:20.648 "nvme_io": false, 00:10:20.648 "nvme_io_md": false, 00:10:20.648 "write_zeroes": true, 00:10:20.648 "zcopy": false, 00:10:20.648 "get_zone_info": false, 00:10:20.648 "zone_management": false, 00:10:20.648 "zone_append": false, 00:10:20.648 "compare": false, 00:10:20.648 "compare_and_write": false, 00:10:20.648 "abort": false, 00:10:20.648 "seek_hole": false, 00:10:20.648 "seek_data": false, 00:10:20.648 "copy": false, 00:10:20.648 "nvme_iov_md": false 00:10:20.648 }, 00:10:20.648 "memory_domains": [ 00:10:20.648 { 00:10:20.648 "dma_device_id": "system", 00:10:20.648 "dma_device_type": 1 00:10:20.648 }, 00:10:20.648 { 00:10:20.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.648 "dma_device_type": 2 00:10:20.648 }, 00:10:20.648 { 00:10:20.648 "dma_device_id": "system", 00:10:20.648 
"dma_device_type": 1 00:10:20.649 }, 00:10:20.649 { 00:10:20.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.649 "dma_device_type": 2 00:10:20.649 } 00:10:20.649 ], 00:10:20.649 "driver_specific": { 00:10:20.649 "raid": { 00:10:20.649 "uuid": "11f7c22e-d57f-44c1-9d80-89806b160516", 00:10:20.649 "strip_size_kb": 64, 00:10:20.649 "state": "online", 00:10:20.649 "raid_level": "concat", 00:10:20.649 "superblock": true, 00:10:20.649 "num_base_bdevs": 2, 00:10:20.649 "num_base_bdevs_discovered": 2, 00:10:20.649 "num_base_bdevs_operational": 2, 00:10:20.649 "base_bdevs_list": [ 00:10:20.649 { 00:10:20.649 "name": "pt1", 00:10:20.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.649 "is_configured": true, 00:10:20.649 "data_offset": 2048, 00:10:20.649 "data_size": 63488 00:10:20.649 }, 00:10:20.649 { 00:10:20.649 "name": "pt2", 00:10:20.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.649 "is_configured": true, 00:10:20.649 "data_offset": 2048, 00:10:20.649 "data_size": 63488 00:10:20.649 } 00:10:20.649 ] 00:10:20.649 } 00:10:20.649 } 00:10:20.649 }' 00:10:20.649 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.649 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:20.649 pt2' 00:10:20.649 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.909 16:26:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:20.909 [2024-12-06 16:26:02.612571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=11f7c22e-d57f-44c1-9d80-89806b160516 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 11f7c22e-d57f-44c1-9d80-89806b160516 ']' 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.909 [2024-12-06 16:26:02.660259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.909 [2024-12-06 16:26:02.660295] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.909 [2024-12-06 16:26:02.660387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.909 [2024-12-06 16:26:02.660442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.909 [2024-12-06 16:26:02.660453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.909 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.169 [2024-12-06 16:26:02.784196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:21.169 [2024-12-06 16:26:02.786172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:21.169 [2024-12-06 16:26:02.786262] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:21.169 [2024-12-06 16:26:02.786314] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:21.169 [2024-12-06 16:26:02.786332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.169 [2024-12-06 16:26:02.786343] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:10:21.169 request: 00:10:21.169 { 00:10:21.169 "name": "raid_bdev1", 00:10:21.169 "raid_level": "concat", 00:10:21.169 "base_bdevs": [ 00:10:21.169 "malloc1", 00:10:21.169 "malloc2" 00:10:21.169 ], 00:10:21.169 "strip_size_kb": 64, 00:10:21.169 "superblock": false, 00:10:21.169 "method": "bdev_raid_create", 00:10:21.169 "req_id": 1 00:10:21.169 } 00:10:21.169 Got JSON-RPC error response 00:10:21.169 response: 00:10:21.169 { 00:10:21.169 "code": -17, 00:10:21.169 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:21.169 } 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.169 [2024-12-06 16:26:02.843989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:21.169 [2024-12-06 16:26:02.844051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.169 [2024-12-06 16:26:02.844073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:21.169 [2024-12-06 16:26:02.844082] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.169 [2024-12-06 16:26:02.846498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.169 pt1 00:10:21.169 [2024-12-06 16:26:02.846578] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.169 [2024-12-06 16:26:02.846664] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:21.169 [2024-12-06 16:26:02.846701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.169 "name": "raid_bdev1", 00:10:21.169 "uuid": "11f7c22e-d57f-44c1-9d80-89806b160516", 00:10:21.169 "strip_size_kb": 64, 00:10:21.169 "state": "configuring", 00:10:21.169 "raid_level": "concat", 00:10:21.169 "superblock": true, 00:10:21.169 "num_base_bdevs": 2, 00:10:21.169 "num_base_bdevs_discovered": 1, 00:10:21.169 "num_base_bdevs_operational": 2, 00:10:21.169 "base_bdevs_list": [ 00:10:21.169 { 00:10:21.169 "name": "pt1", 00:10:21.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.169 "is_configured": true, 00:10:21.169 "data_offset": 2048, 00:10:21.169 "data_size": 63488 00:10:21.169 }, 00:10:21.169 { 00:10:21.169 "name": null, 00:10:21.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.169 "is_configured": false, 00:10:21.169 "data_offset": 2048, 00:10:21.169 "data_size": 63488 00:10:21.169 } 00:10:21.169 ] 00:10:21.169 }' 00:10:21.169 16:26:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.169 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.429 [2024-12-06 16:26:03.235371] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.429 [2024-12-06 16:26:03.235501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.429 [2024-12-06 16:26:03.235531] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:21.429 [2024-12-06 16:26:03.235541] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.429 [2024-12-06 16:26:03.235975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.429 [2024-12-06 16:26:03.235993] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:21.429 [2024-12-06 16:26:03.236073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:21.429 [2024-12-06 16:26:03.236102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.429 [2024-12-06 16:26:03.236230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:21.429 [2024-12-06 16:26:03.236241] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:21.429 [2024-12-06 16:26:03.236495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:21.429 [2024-12-06 16:26:03.236617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:21.429 [2024-12-06 16:26:03.236631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:21.429 [2024-12-06 16:26:03.236738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.429 pt2 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.429 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.702 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.702 "name": "raid_bdev1", 00:10:21.702 "uuid": "11f7c22e-d57f-44c1-9d80-89806b160516", 00:10:21.702 "strip_size_kb": 64, 00:10:21.702 "state": "online", 00:10:21.702 "raid_level": "concat", 00:10:21.702 "superblock": true, 00:10:21.702 "num_base_bdevs": 2, 00:10:21.702 "num_base_bdevs_discovered": 2, 00:10:21.702 "num_base_bdevs_operational": 2, 00:10:21.702 "base_bdevs_list": [ 00:10:21.702 { 00:10:21.702 "name": "pt1", 00:10:21.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.702 "is_configured": true, 00:10:21.702 "data_offset": 2048, 00:10:21.702 "data_size": 63488 00:10:21.702 }, 00:10:21.702 { 00:10:21.702 "name": "pt2", 00:10:21.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.702 "is_configured": true, 00:10:21.702 "data_offset": 2048, 00:10:21.702 "data_size": 63488 00:10:21.702 } 00:10:21.702 ] 00:10:21.702 }' 00:10:21.702 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.702 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:21.978 
16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.978 [2024-12-06 16:26:03.670924] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.978 "name": "raid_bdev1", 00:10:21.978 "aliases": [ 00:10:21.978 "11f7c22e-d57f-44c1-9d80-89806b160516" 00:10:21.978 ], 00:10:21.978 "product_name": "Raid Volume", 00:10:21.978 "block_size": 512, 00:10:21.978 "num_blocks": 126976, 00:10:21.978 "uuid": "11f7c22e-d57f-44c1-9d80-89806b160516", 00:10:21.978 "assigned_rate_limits": { 00:10:21.978 "rw_ios_per_sec": 0, 00:10:21.978 "rw_mbytes_per_sec": 0, 00:10:21.978 "r_mbytes_per_sec": 0, 00:10:21.978 "w_mbytes_per_sec": 0 00:10:21.978 }, 00:10:21.978 "claimed": false, 00:10:21.978 "zoned": false, 00:10:21.978 "supported_io_types": { 00:10:21.978 "read": true, 00:10:21.978 "write": true, 00:10:21.978 "unmap": true, 00:10:21.978 "flush": true, 00:10:21.978 "reset": true, 00:10:21.978 "nvme_admin": false, 00:10:21.978 "nvme_io": false, 00:10:21.978 "nvme_io_md": false, 00:10:21.978 
"write_zeroes": true, 00:10:21.978 "zcopy": false, 00:10:21.978 "get_zone_info": false, 00:10:21.978 "zone_management": false, 00:10:21.978 "zone_append": false, 00:10:21.978 "compare": false, 00:10:21.978 "compare_and_write": false, 00:10:21.978 "abort": false, 00:10:21.978 "seek_hole": false, 00:10:21.978 "seek_data": false, 00:10:21.978 "copy": false, 00:10:21.978 "nvme_iov_md": false 00:10:21.978 }, 00:10:21.978 "memory_domains": [ 00:10:21.978 { 00:10:21.978 "dma_device_id": "system", 00:10:21.978 "dma_device_type": 1 00:10:21.978 }, 00:10:21.978 { 00:10:21.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.978 "dma_device_type": 2 00:10:21.978 }, 00:10:21.978 { 00:10:21.978 "dma_device_id": "system", 00:10:21.978 "dma_device_type": 1 00:10:21.978 }, 00:10:21.978 { 00:10:21.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.978 "dma_device_type": 2 00:10:21.978 } 00:10:21.978 ], 00:10:21.978 "driver_specific": { 00:10:21.978 "raid": { 00:10:21.978 "uuid": "11f7c22e-d57f-44c1-9d80-89806b160516", 00:10:21.978 "strip_size_kb": 64, 00:10:21.978 "state": "online", 00:10:21.978 "raid_level": "concat", 00:10:21.978 "superblock": true, 00:10:21.978 "num_base_bdevs": 2, 00:10:21.978 "num_base_bdevs_discovered": 2, 00:10:21.978 "num_base_bdevs_operational": 2, 00:10:21.978 "base_bdevs_list": [ 00:10:21.978 { 00:10:21.978 "name": "pt1", 00:10:21.978 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.978 "is_configured": true, 00:10:21.978 "data_offset": 2048, 00:10:21.978 "data_size": 63488 00:10:21.978 }, 00:10:21.978 { 00:10:21.978 "name": "pt2", 00:10:21.978 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.978 "is_configured": true, 00:10:21.978 "data_offset": 2048, 00:10:21.978 "data_size": 63488 00:10:21.978 } 00:10:21.978 ] 00:10:21.978 } 00:10:21.978 } 00:10:21.978 }' 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:21.978 pt2' 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.978 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.258 16:26:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:22.258 [2024-12-06 16:26:03.894558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.258 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 11f7c22e-d57f-44c1-9d80-89806b160516 '!=' 11f7c22e-d57f-44c1-9d80-89806b160516 ']' 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73890 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73890 ']' 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73890 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73890 00:10:22.259 killing process with pid 73890 
00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73890' 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73890 00:10:22.259 [2024-12-06 16:26:03.980932] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.259 [2024-12-06 16:26:03.981024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.259 [2024-12-06 16:26:03.981084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.259 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73890 00:10:22.259 [2024-12-06 16:26:03.981094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:22.259 [2024-12-06 16:26:04.004602] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.518 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:22.518 ************************************ 00:10:22.518 END TEST raid_superblock_test 00:10:22.518 ************************************ 00:10:22.518 00:10:22.518 real 0m3.268s 00:10:22.518 user 0m5.043s 00:10:22.518 sys 0m0.697s 00:10:22.518 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.518 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.518 16:26:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:22.518 16:26:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:22.518 16:26:04 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.518 16:26:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.518 ************************************ 00:10:22.518 START TEST raid_read_error_test 00:10:22.518 ************************************ 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:22.518 16:26:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N9l4V6Qn2t 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74091 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74091 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74091 ']' 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.518 16:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.778 [2024-12-06 16:26:04.397167] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:10:22.778 [2024-12-06 16:26:04.397315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74091 ] 00:10:22.778 [2024-12-06 16:26:04.567410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.778 [2024-12-06 16:26:04.595871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.037 [2024-12-06 16:26:04.638800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.037 [2024-12-06 16:26:04.638841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.604 BaseBdev1_malloc 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.604 true 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.604 [2024-12-06 16:26:05.274475] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:23.604 [2024-12-06 16:26:05.274565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.604 [2024-12-06 16:26:05.274595] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:23.604 [2024-12-06 16:26:05.274605] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.604 [2024-12-06 16:26:05.276919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.604 [2024-12-06 16:26:05.276956] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:23.604 BaseBdev1 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:23.604 BaseBdev2_malloc 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.604 true 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.604 [2024-12-06 16:26:05.315089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:23.604 [2024-12-06 16:26:05.315146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.604 [2024-12-06 16:26:05.315167] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:23.604 [2024-12-06 16:26:05.315177] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.604 [2024-12-06 16:26:05.317620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.604 [2024-12-06 16:26:05.317664] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:23.604 BaseBdev2 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:23.604 
16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.604 [2024-12-06 16:26:05.327127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.604 [2024-12-06 16:26:05.329183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.604 [2024-12-06 16:26:05.329408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:23.604 [2024-12-06 16:26:05.329430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:23.604 [2024-12-06 16:26:05.329733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:23.604 [2024-12-06 16:26:05.329889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:23.604 [2024-12-06 16:26:05.329908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:23.604 [2024-12-06 16:26:05.330070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.604 "name": "raid_bdev1", 00:10:23.604 "uuid": "564a6960-4c64-475e-99c0-86c3451a6d4e", 00:10:23.604 "strip_size_kb": 64, 00:10:23.604 "state": "online", 00:10:23.604 "raid_level": "concat", 00:10:23.604 "superblock": true, 00:10:23.604 "num_base_bdevs": 2, 00:10:23.604 "num_base_bdevs_discovered": 2, 00:10:23.604 "num_base_bdevs_operational": 2, 00:10:23.604 "base_bdevs_list": [ 00:10:23.604 { 00:10:23.604 "name": "BaseBdev1", 00:10:23.604 "uuid": "9cec2ba2-3d50-5f37-a3f1-bfc7d972b617", 00:10:23.604 "is_configured": true, 00:10:23.604 "data_offset": 2048, 00:10:23.604 "data_size": 63488 00:10:23.604 }, 00:10:23.604 { 00:10:23.604 "name": "BaseBdev2", 00:10:23.604 "uuid": "77a111ac-0e97-5f53-8d39-dcb6ef14aa7b", 00:10:23.604 "is_configured": true, 00:10:23.604 "data_offset": 2048, 00:10:23.604 "data_size": 63488 00:10:23.604 } 00:10:23.604 ] 00:10:23.604 }' 00:10:23.604 16:26:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.604 16:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.169 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:24.169 16:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:24.169 [2024-12-06 16:26:05.878584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.107 "name": "raid_bdev1", 00:10:25.107 "uuid": "564a6960-4c64-475e-99c0-86c3451a6d4e", 00:10:25.107 "strip_size_kb": 64, 00:10:25.107 "state": "online", 00:10:25.107 "raid_level": "concat", 00:10:25.107 "superblock": true, 00:10:25.107 "num_base_bdevs": 2, 00:10:25.107 "num_base_bdevs_discovered": 2, 00:10:25.107 "num_base_bdevs_operational": 2, 00:10:25.107 "base_bdevs_list": [ 00:10:25.107 { 00:10:25.107 "name": "BaseBdev1", 00:10:25.107 "uuid": "9cec2ba2-3d50-5f37-a3f1-bfc7d972b617", 00:10:25.107 "is_configured": true, 00:10:25.107 "data_offset": 2048, 00:10:25.107 "data_size": 63488 00:10:25.107 }, 00:10:25.107 { 00:10:25.107 "name": "BaseBdev2", 00:10:25.107 "uuid": "77a111ac-0e97-5f53-8d39-dcb6ef14aa7b", 00:10:25.107 "is_configured": true, 00:10:25.107 "data_offset": 2048, 00:10:25.107 "data_size": 63488 00:10:25.107 } 00:10:25.107 ] 00:10:25.107 }' 00:10:25.107 16:26:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.107 16:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.677 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.677 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.677 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.677 [2024-12-06 16:26:07.243705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.677 [2024-12-06 16:26:07.243740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.677 [2024-12-06 16:26:07.246517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.677 [2024-12-06 16:26:07.246603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.677 [2024-12-06 16:26:07.246644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.677 [2024-12-06 16:26:07.246655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:25.677 { 00:10:25.677 "results": [ 00:10:25.677 { 00:10:25.677 "job": "raid_bdev1", 00:10:25.677 "core_mask": "0x1", 00:10:25.677 "workload": "randrw", 00:10:25.677 "percentage": 50, 00:10:25.677 "status": "finished", 00:10:25.677 "queue_depth": 1, 00:10:25.677 "io_size": 131072, 00:10:25.677 "runtime": 1.365839, 00:10:25.677 "iops": 14636.424937346203, 00:10:25.677 "mibps": 1829.5531171682753, 00:10:25.677 "io_failed": 1, 00:10:25.677 "io_timeout": 0, 00:10:25.677 "avg_latency_us": 94.10982139580723, 00:10:25.677 "min_latency_us": 27.276855895196505, 00:10:25.677 "max_latency_us": 1788.646288209607 00:10:25.677 } 00:10:25.677 ], 00:10:25.677 "core_count": 1 00:10:25.677 } 00:10:25.677 16:26:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.677 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74091 00:10:25.677 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74091 ']' 00:10:25.677 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74091 00:10:25.677 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:25.677 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.678 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74091 00:10:25.678 killing process with pid 74091 00:10:25.678 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.678 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.678 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74091' 00:10:25.678 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74091 00:10:25.678 [2024-12-06 16:26:07.292517] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.678 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74091 00:10:25.678 [2024-12-06 16:26:07.308728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.938 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:25.938 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N9l4V6Qn2t 00:10:25.938 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:25.938 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:25.938 16:26:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:25.938 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.938 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:25.938 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:25.938 00:10:25.938 real 0m3.239s 00:10:25.938 user 0m4.136s 00:10:25.938 sys 0m0.554s 00:10:25.938 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.938 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.938 ************************************ 00:10:25.938 END TEST raid_read_error_test 00:10:25.938 ************************************ 00:10:25.938 16:26:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:25.938 16:26:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.938 16:26:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.938 16:26:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.938 ************************************ 00:10:25.938 START TEST raid_write_error_test 00:10:25.938 ************************************ 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4XsiEk2bMN 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74220 
00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74220 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74220 ']' 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.938 16:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.939 16:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.939 [2024-12-06 16:26:07.701500] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:10:25.939 [2024-12-06 16:26:07.701645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74220 ] 00:10:26.198 [2024-12-06 16:26:07.872908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.198 [2024-12-06 16:26:07.901161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.198 [2024-12-06 16:26:07.944412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.198 [2024-12-06 16:26:07.944452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.768 BaseBdev1_malloc 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.768 true 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.768 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.768 [2024-12-06 16:26:08.600772] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:26.768 [2024-12-06 16:26:08.600844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.768 [2024-12-06 16:26:08.600869] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:26.768 [2024-12-06 16:26:08.600880] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.768 [2024-12-06 16:26:08.603254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.768 [2024-12-06 16:26:08.603288] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:27.028 BaseBdev1 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.029 BaseBdev2_malloc 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:27.029 16:26:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.029 true 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.029 [2024-12-06 16:26:08.641759] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:27.029 [2024-12-06 16:26:08.641825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.029 [2024-12-06 16:26:08.641848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:27.029 [2024-12-06 16:26:08.641857] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.029 [2024-12-06 16:26:08.644080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.029 [2024-12-06 16:26:08.644116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:27.029 BaseBdev2 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.029 [2024-12-06 16:26:08.653785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:27.029 [2024-12-06 16:26:08.655676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.029 [2024-12-06 16:26:08.655854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:27.029 [2024-12-06 16:26:08.655868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:27.029 [2024-12-06 16:26:08.656161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:27.029 [2024-12-06 16:26:08.656332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:27.029 [2024-12-06 16:26:08.656347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:27.029 [2024-12-06 16:26:08.656489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.029 16:26:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.029 "name": "raid_bdev1", 00:10:27.029 "uuid": "ced2518e-88b2-4a26-b3b4-302d0be7b72f", 00:10:27.029 "strip_size_kb": 64, 00:10:27.029 "state": "online", 00:10:27.029 "raid_level": "concat", 00:10:27.029 "superblock": true, 00:10:27.029 "num_base_bdevs": 2, 00:10:27.029 "num_base_bdevs_discovered": 2, 00:10:27.029 "num_base_bdevs_operational": 2, 00:10:27.029 "base_bdevs_list": [ 00:10:27.029 { 00:10:27.029 "name": "BaseBdev1", 00:10:27.029 "uuid": "8cf8f9fe-c84b-5882-ad93-5cb49e551e67", 00:10:27.029 "is_configured": true, 00:10:27.029 "data_offset": 2048, 00:10:27.029 "data_size": 63488 00:10:27.029 }, 00:10:27.029 { 00:10:27.029 "name": "BaseBdev2", 00:10:27.029 "uuid": "87bff317-b170-5e06-b694-e6bedeaf6b63", 00:10:27.029 "is_configured": true, 00:10:27.029 "data_offset": 2048, 00:10:27.029 "data_size": 63488 00:10:27.029 } 00:10:27.029 ] 00:10:27.029 }' 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.029 16:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.289 16:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:10:27.289 16:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:27.548 [2024-12-06 16:26:09.165291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
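After injecting the write failure, `verify_raid_bdev_state` re-queries `bdev_raid_get_bdevs all` and filters the listing with jq (`.[] | select(.name == "raid_bdev1")`), then checks individual fields of the result. A hedged sketch of that field extraction, with a heredoc-style stub standing in for the live RPC output and plain `sed` substituting for the jq selection (an illustration, not the helper's actual parsing):

```shell
#!/usr/bin/env bash
# Stub for the jq-selected entry from `rpc.py bdev_raid_get_bdevs all`;
# field names come from the JSON dump in the trace.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "concat",
  "num_base_bdevs_discovered": 2
}'
# Pull one field out of the selected entry, in the spirit of the helper's
# per-field checks (state must still be "online" after the injected error).
state=$(echo "$raid_bdev_info" | sed -n 's/.*"state": "\([a-z]*\)".*/\1/p')
echo "$state"
# -> online
```

Note the state stays `online` here because a concat array has no redundancy to degrade; the error shows up later as a nonzero `fail_per_s` instead.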
00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.486 "name": "raid_bdev1", 00:10:28.486 "uuid": "ced2518e-88b2-4a26-b3b4-302d0be7b72f", 00:10:28.486 "strip_size_kb": 64, 00:10:28.486 "state": "online", 00:10:28.486 "raid_level": "concat", 00:10:28.486 "superblock": true, 00:10:28.486 "num_base_bdevs": 2, 00:10:28.486 "num_base_bdevs_discovered": 2, 00:10:28.486 "num_base_bdevs_operational": 2, 00:10:28.486 "base_bdevs_list": [ 00:10:28.486 { 00:10:28.486 "name": "BaseBdev1", 00:10:28.486 "uuid": "8cf8f9fe-c84b-5882-ad93-5cb49e551e67", 00:10:28.486 "is_configured": true, 00:10:28.486 "data_offset": 2048, 00:10:28.486 "data_size": 63488 00:10:28.486 }, 00:10:28.486 { 00:10:28.486 "name": "BaseBdev2", 00:10:28.486 "uuid": "87bff317-b170-5e06-b694-e6bedeaf6b63", 00:10:28.486 "is_configured": true, 00:10:28.486 "data_offset": 2048, 00:10:28.486 "data_size": 63488 00:10:28.486 } 00:10:28.486 ] 00:10:28.486 }' 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.486 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.746 16:26:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.747 [2024-12-06 16:26:10.476945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.747 [2024-12-06 16:26:10.477028] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.747 [2024-12-06 16:26:10.479721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.747 [2024-12-06 16:26:10.479818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.747 [2024-12-06 16:26:10.479873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.747 [2024-12-06 16:26:10.479907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:28.747 { 00:10:28.747 "results": [ 00:10:28.747 { 00:10:28.747 "job": "raid_bdev1", 00:10:28.747 "core_mask": "0x1", 00:10:28.747 "workload": "randrw", 00:10:28.747 "percentage": 50, 00:10:28.747 "status": "finished", 00:10:28.747 "queue_depth": 1, 00:10:28.747 "io_size": 131072, 00:10:28.747 "runtime": 1.312527, 00:10:28.747 "iops": 15777.961139085139, 00:10:28.747 "mibps": 1972.2451423856423, 00:10:28.747 "io_failed": 1, 00:10:28.747 "io_timeout": 0, 00:10:28.747 "avg_latency_us": 87.44391634107102, 00:10:28.747 "min_latency_us": 26.494323144104804, 00:10:28.747 "max_latency_us": 1359.3711790393013 00:10:28.747 } 00:10:28.747 ], 00:10:28.747 "core_count": 1 00:10:28.747 } 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74220 00:10:28.747 16:26:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74220 ']' 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74220 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74220 00:10:28.747 killing process with pid 74220 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74220' 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74220 00:10:28.747 [2024-12-06 16:26:10.522965] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.747 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74220 00:10:28.747 [2024-12-06 16:26:10.539018] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.008 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4XsiEk2bMN 00:10:29.008 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:29.008 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:29.008 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:10:29.008 ************************************ 00:10:29.008 END TEST raid_write_error_test 00:10:29.008 ************************************ 00:10:29.008 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy concat 00:10:29.008 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.008 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.008 16:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:10:29.008 00:10:29.008 real 0m3.157s 00:10:29.008 user 0m4.002s 00:10:29.008 sys 0m0.508s 00:10:29.008 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.008 16:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.008 16:26:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:29.008 16:26:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:29.008 16:26:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:29.008 16:26:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.008 16:26:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.008 ************************************ 00:10:29.008 START TEST raid_state_function_test 00:10:29.008 ************************************ 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( 
i <= num_base_bdevs )) 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74347 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74347' 00:10:29.008 Process raid pid: 74347 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74347 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74347 ']' 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.008 16:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.269 [2024-12-06 16:26:10.917440] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:10:29.269 [2024-12-06 16:26:10.917600] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.269 [2024-12-06 16:26:11.092622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.528 [2024-12-06 16:26:11.123646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.528 [2024-12-06 16:26:11.167513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.528 [2024-12-06 16:26:11.167569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.097 [2024-12-06 16:26:11.774655] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.097 [2024-12-06 16:26:11.774719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.097 [2024-12-06 16:26:11.774729] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.097 [2024-12-06 16:26:11.774741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.097 16:26:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.097 "name": "Existed_Raid", 00:10:30.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.097 "strip_size_kb": 0, 00:10:30.097 "state": "configuring", 00:10:30.097 
"raid_level": "raid1", 00:10:30.097 "superblock": false, 00:10:30.097 "num_base_bdevs": 2, 00:10:30.097 "num_base_bdevs_discovered": 0, 00:10:30.097 "num_base_bdevs_operational": 2, 00:10:30.097 "base_bdevs_list": [ 00:10:30.097 { 00:10:30.097 "name": "BaseBdev1", 00:10:30.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.097 "is_configured": false, 00:10:30.097 "data_offset": 0, 00:10:30.097 "data_size": 0 00:10:30.097 }, 00:10:30.097 { 00:10:30.097 "name": "BaseBdev2", 00:10:30.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.097 "is_configured": false, 00:10:30.097 "data_offset": 0, 00:10:30.097 "data_size": 0 00:10:30.097 } 00:10:30.097 ] 00:10:30.097 }' 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.097 16:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.357 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.357 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.357 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.357 [2024-12-06 16:26:12.185872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.357 [2024-12-06 16:26:12.185961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:30.357 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.357 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:30.357 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.357 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:30.617 [2024-12-06 16:26:12.197851] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.617 [2024-12-06 16:26:12.197936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.617 [2024-12-06 16:26:12.197971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.617 [2024-12-06 16:26:12.197997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.617 [2024-12-06 16:26:12.218714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.617 BaseBdev1 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.617 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.617 [ 00:10:30.617 { 00:10:30.617 "name": "BaseBdev1", 00:10:30.617 "aliases": [ 00:10:30.617 "2f638c5c-3479-4a79-a28e-58b03daa2998" 00:10:30.617 ], 00:10:30.617 "product_name": "Malloc disk", 00:10:30.617 "block_size": 512, 00:10:30.617 "num_blocks": 65536, 00:10:30.617 "uuid": "2f638c5c-3479-4a79-a28e-58b03daa2998", 00:10:30.617 "assigned_rate_limits": { 00:10:30.617 "rw_ios_per_sec": 0, 00:10:30.617 "rw_mbytes_per_sec": 0, 00:10:30.617 "r_mbytes_per_sec": 0, 00:10:30.617 "w_mbytes_per_sec": 0 00:10:30.617 }, 00:10:30.617 "claimed": true, 00:10:30.617 "claim_type": "exclusive_write", 00:10:30.617 "zoned": false, 00:10:30.617 "supported_io_types": { 00:10:30.617 "read": true, 00:10:30.617 "write": true, 00:10:30.617 "unmap": true, 00:10:30.617 "flush": true, 00:10:30.617 "reset": true, 00:10:30.617 "nvme_admin": false, 00:10:30.617 "nvme_io": false, 00:10:30.617 "nvme_io_md": false, 00:10:30.617 "write_zeroes": true, 00:10:30.617 "zcopy": true, 00:10:30.617 "get_zone_info": false, 00:10:30.617 "zone_management": false, 00:10:30.617 "zone_append": false, 00:10:30.617 "compare": false, 00:10:30.617 "compare_and_write": false, 00:10:30.617 "abort": true, 00:10:30.617 "seek_hole": false, 00:10:30.617 "seek_data": false, 00:10:30.617 "copy": true, 00:10:30.617 "nvme_iov_md": 
false 00:10:30.617 }, 00:10:30.617 "memory_domains": [ 00:10:30.617 { 00:10:30.617 "dma_device_id": "system", 00:10:30.617 "dma_device_type": 1 00:10:30.617 }, 00:10:30.617 { 00:10:30.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.618 "dma_device_type": 2 00:10:30.618 } 00:10:30.618 ], 00:10:30.618 "driver_specific": {} 00:10:30.618 } 00:10:30.618 ] 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.618 
16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.618 "name": "Existed_Raid", 00:10:30.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.618 "strip_size_kb": 0, 00:10:30.618 "state": "configuring", 00:10:30.618 "raid_level": "raid1", 00:10:30.618 "superblock": false, 00:10:30.618 "num_base_bdevs": 2, 00:10:30.618 "num_base_bdevs_discovered": 1, 00:10:30.618 "num_base_bdevs_operational": 2, 00:10:30.618 "base_bdevs_list": [ 00:10:30.618 { 00:10:30.618 "name": "BaseBdev1", 00:10:30.618 "uuid": "2f638c5c-3479-4a79-a28e-58b03daa2998", 00:10:30.618 "is_configured": true, 00:10:30.618 "data_offset": 0, 00:10:30.618 "data_size": 65536 00:10:30.618 }, 00:10:30.618 { 00:10:30.618 "name": "BaseBdev2", 00:10:30.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.618 "is_configured": false, 00:10:30.618 "data_offset": 0, 00:10:30.618 "data_size": 0 00:10:30.618 } 00:10:30.618 ] 00:10:30.618 }' 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.618 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.878 [2024-12-06 16:26:12.674009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.878 [2024-12-06 16:26:12.674064] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.878 [2024-12-06 16:26:12.686003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.878 [2024-12-06 16:26:12.688025] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.878 [2024-12-06 16:26:12.688068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.878 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.138 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.138 "name": "Existed_Raid", 00:10:31.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.138 "strip_size_kb": 0, 00:10:31.138 "state": "configuring", 00:10:31.138 "raid_level": "raid1", 00:10:31.138 "superblock": false, 00:10:31.138 "num_base_bdevs": 2, 00:10:31.138 "num_base_bdevs_discovered": 1, 00:10:31.138 "num_base_bdevs_operational": 2, 00:10:31.138 "base_bdevs_list": [ 00:10:31.138 { 00:10:31.138 "name": "BaseBdev1", 00:10:31.138 "uuid": "2f638c5c-3479-4a79-a28e-58b03daa2998", 00:10:31.138 "is_configured": true, 00:10:31.138 "data_offset": 0, 00:10:31.138 "data_size": 65536 00:10:31.138 }, 00:10:31.138 { 00:10:31.138 "name": "BaseBdev2", 00:10:31.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.138 "is_configured": false, 00:10:31.138 "data_offset": 0, 00:10:31.138 "data_size": 0 00:10:31.138 } 00:10:31.138 ] 
00:10:31.138 }' 00:10:31.138 16:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.138 16:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.400 [2024-12-06 16:26:13.124336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.400 [2024-12-06 16:26:13.124472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:31.400 [2024-12-06 16:26:13.124502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:31.400 [2024-12-06 16:26:13.124855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:31.400 [2024-12-06 16:26:13.125067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:31.400 [2024-12-06 16:26:13.125120] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:31.400 [2024-12-06 16:26:13.125392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.400 BaseBdev2 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.400 [ 00:10:31.400 { 00:10:31.400 "name": "BaseBdev2", 00:10:31.400 "aliases": [ 00:10:31.400 "79587d62-84ba-469a-baca-1a8bcce9b90c" 00:10:31.400 ], 00:10:31.400 "product_name": "Malloc disk", 00:10:31.400 "block_size": 512, 00:10:31.400 "num_blocks": 65536, 00:10:31.400 "uuid": "79587d62-84ba-469a-baca-1a8bcce9b90c", 00:10:31.400 "assigned_rate_limits": { 00:10:31.400 "rw_ios_per_sec": 0, 00:10:31.400 "rw_mbytes_per_sec": 0, 00:10:31.400 "r_mbytes_per_sec": 0, 00:10:31.400 "w_mbytes_per_sec": 0 00:10:31.400 }, 00:10:31.400 "claimed": true, 00:10:31.400 "claim_type": "exclusive_write", 00:10:31.400 "zoned": false, 00:10:31.400 "supported_io_types": { 00:10:31.400 "read": true, 00:10:31.400 "write": true, 00:10:31.400 "unmap": true, 00:10:31.400 "flush": true, 00:10:31.400 "reset": true, 00:10:31.400 "nvme_admin": false, 00:10:31.400 "nvme_io": false, 00:10:31.400 "nvme_io_md": false, 00:10:31.400 "write_zeroes": 
true, 00:10:31.400 "zcopy": true, 00:10:31.400 "get_zone_info": false, 00:10:31.400 "zone_management": false, 00:10:31.400 "zone_append": false, 00:10:31.400 "compare": false, 00:10:31.400 "compare_and_write": false, 00:10:31.400 "abort": true, 00:10:31.400 "seek_hole": false, 00:10:31.400 "seek_data": false, 00:10:31.400 "copy": true, 00:10:31.400 "nvme_iov_md": false 00:10:31.400 }, 00:10:31.400 "memory_domains": [ 00:10:31.400 { 00:10:31.400 "dma_device_id": "system", 00:10:31.400 "dma_device_type": 1 00:10:31.400 }, 00:10:31.400 { 00:10:31.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.400 "dma_device_type": 2 00:10:31.400 } 00:10:31.400 ], 00:10:31.400 "driver_specific": {} 00:10:31.400 } 00:10:31.400 ] 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.400 16:26:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.400 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.400 "name": "Existed_Raid", 00:10:31.400 "uuid": "396d86b7-2269-4275-a978-002239126a5a", 00:10:31.400 "strip_size_kb": 0, 00:10:31.400 "state": "online", 00:10:31.400 "raid_level": "raid1", 00:10:31.400 "superblock": false, 00:10:31.400 "num_base_bdevs": 2, 00:10:31.400 "num_base_bdevs_discovered": 2, 00:10:31.400 "num_base_bdevs_operational": 2, 00:10:31.400 "base_bdevs_list": [ 00:10:31.400 { 00:10:31.400 "name": "BaseBdev1", 00:10:31.400 "uuid": "2f638c5c-3479-4a79-a28e-58b03daa2998", 00:10:31.400 "is_configured": true, 00:10:31.400 "data_offset": 0, 00:10:31.400 "data_size": 65536 00:10:31.401 }, 00:10:31.401 { 00:10:31.401 "name": "BaseBdev2", 00:10:31.401 "uuid": "79587d62-84ba-469a-baca-1a8bcce9b90c", 00:10:31.401 "is_configured": true, 00:10:31.401 "data_offset": 0, 00:10:31.401 "data_size": 65536 00:10:31.401 } 00:10:31.401 ] 00:10:31.401 }' 00:10:31.401 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.401 16:26:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.973 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:31.973 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:31.973 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:31.973 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:31.973 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:31.973 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:31.973 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:31.973 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:31.973 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.973 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.973 [2024-12-06 16:26:13.615935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.973 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.973 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:31.973 "name": "Existed_Raid", 00:10:31.973 "aliases": [ 00:10:31.973 "396d86b7-2269-4275-a978-002239126a5a" 00:10:31.973 ], 00:10:31.973 "product_name": "Raid Volume", 00:10:31.973 "block_size": 512, 00:10:31.973 "num_blocks": 65536, 00:10:31.973 "uuid": "396d86b7-2269-4275-a978-002239126a5a", 00:10:31.973 "assigned_rate_limits": { 00:10:31.973 "rw_ios_per_sec": 0, 00:10:31.973 "rw_mbytes_per_sec": 0, 00:10:31.973 "r_mbytes_per_sec": 0, 00:10:31.973 
"w_mbytes_per_sec": 0 00:10:31.973 }, 00:10:31.973 "claimed": false, 00:10:31.973 "zoned": false, 00:10:31.973 "supported_io_types": { 00:10:31.973 "read": true, 00:10:31.973 "write": true, 00:10:31.973 "unmap": false, 00:10:31.973 "flush": false, 00:10:31.973 "reset": true, 00:10:31.973 "nvme_admin": false, 00:10:31.973 "nvme_io": false, 00:10:31.973 "nvme_io_md": false, 00:10:31.973 "write_zeroes": true, 00:10:31.973 "zcopy": false, 00:10:31.973 "get_zone_info": false, 00:10:31.973 "zone_management": false, 00:10:31.973 "zone_append": false, 00:10:31.973 "compare": false, 00:10:31.973 "compare_and_write": false, 00:10:31.973 "abort": false, 00:10:31.973 "seek_hole": false, 00:10:31.973 "seek_data": false, 00:10:31.973 "copy": false, 00:10:31.973 "nvme_iov_md": false 00:10:31.973 }, 00:10:31.973 "memory_domains": [ 00:10:31.973 { 00:10:31.973 "dma_device_id": "system", 00:10:31.973 "dma_device_type": 1 00:10:31.973 }, 00:10:31.973 { 00:10:31.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.973 "dma_device_type": 2 00:10:31.973 }, 00:10:31.973 { 00:10:31.973 "dma_device_id": "system", 00:10:31.973 "dma_device_type": 1 00:10:31.973 }, 00:10:31.973 { 00:10:31.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.973 "dma_device_type": 2 00:10:31.973 } 00:10:31.973 ], 00:10:31.973 "driver_specific": { 00:10:31.973 "raid": { 00:10:31.973 "uuid": "396d86b7-2269-4275-a978-002239126a5a", 00:10:31.973 "strip_size_kb": 0, 00:10:31.973 "state": "online", 00:10:31.973 "raid_level": "raid1", 00:10:31.973 "superblock": false, 00:10:31.973 "num_base_bdevs": 2, 00:10:31.973 "num_base_bdevs_discovered": 2, 00:10:31.973 "num_base_bdevs_operational": 2, 00:10:31.973 "base_bdevs_list": [ 00:10:31.973 { 00:10:31.973 "name": "BaseBdev1", 00:10:31.973 "uuid": "2f638c5c-3479-4a79-a28e-58b03daa2998", 00:10:31.973 "is_configured": true, 00:10:31.973 "data_offset": 0, 00:10:31.973 "data_size": 65536 00:10:31.973 }, 00:10:31.973 { 00:10:31.973 "name": "BaseBdev2", 00:10:31.973 "uuid": 
"79587d62-84ba-469a-baca-1a8bcce9b90c", 00:10:31.973 "is_configured": true, 00:10:31.973 "data_offset": 0, 00:10:31.973 "data_size": 65536 00:10:31.973 } 00:10:31.974 ] 00:10:31.974 } 00:10:31.974 } 00:10:31.974 }' 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:31.974 BaseBdev2' 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:31.974 16:26:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.974 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.260 [2024-12-06 16:26:13.819370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.260 "name": "Existed_Raid", 00:10:32.260 "uuid": "396d86b7-2269-4275-a978-002239126a5a", 00:10:32.260 "strip_size_kb": 0, 00:10:32.260 "state": "online", 00:10:32.260 "raid_level": "raid1", 00:10:32.260 "superblock": false, 00:10:32.260 "num_base_bdevs": 2, 00:10:32.260 "num_base_bdevs_discovered": 1, 00:10:32.260 "num_base_bdevs_operational": 1, 00:10:32.260 "base_bdevs_list": [ 00:10:32.260 { 
00:10:32.260 "name": null, 00:10:32.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.260 "is_configured": false, 00:10:32.260 "data_offset": 0, 00:10:32.260 "data_size": 65536 00:10:32.260 }, 00:10:32.260 { 00:10:32.260 "name": "BaseBdev2", 00:10:32.260 "uuid": "79587d62-84ba-469a-baca-1a8bcce9b90c", 00:10:32.260 "is_configured": true, 00:10:32.260 "data_offset": 0, 00:10:32.260 "data_size": 65536 00:10:32.260 } 00:10:32.260 ] 00:10:32.260 }' 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.260 16:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.520 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:32.520 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:32.520 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.520 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.520 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.520 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:32.520 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.520 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:32.520 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:32.520 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:32.520 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.520 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:32.779 [2024-12-06 16:26:14.358245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:32.779 [2024-12-06 16:26:14.358345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.779 [2024-12-06 16:26:14.370341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.779 [2024-12-06 16:26:14.370396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.779 [2024-12-06 16:26:14.370411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74347 00:10:32.779 16:26:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74347 ']' 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74347 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74347 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.779 killing process with pid 74347 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74347' 00:10:32.779 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74347 00:10:32.780 [2024-12-06 16:26:14.476244] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.780 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74347 00:10:32.780 [2024-12-06 16:26:14.477312] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:33.039 00:10:33.039 real 0m3.866s 00:10:33.039 user 0m6.103s 00:10:33.039 sys 0m0.770s 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.039 ************************************ 00:10:33.039 END TEST raid_state_function_test 00:10:33.039 ************************************ 00:10:33.039 16:26:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:33.039 16:26:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:33.039 16:26:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.039 16:26:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.039 ************************************ 00:10:33.039 START TEST raid_state_function_test_sb 00:10:33.039 ************************************ 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:33.039 Process raid pid: 74589 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74589 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74589' 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74589 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74589 ']' 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.039 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.039 16:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.039 [2024-12-06 16:26:14.857495] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:10:33.040 [2024-12-06 16:26:14.857659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.298 [2024-12-06 16:26:15.017958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.298 [2024-12-06 16:26:15.049298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.298 [2024-12-06 16:26:15.095083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.298 [2024-12-06 16:26:15.095214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.234 [2024-12-06 16:26:15.786552] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.234 [2024-12-06 16:26:15.786677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.234 [2024-12-06 16:26:15.786694] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.234 [2024-12-06 16:26:15.786704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.234 "name": "Existed_Raid", 00:10:34.234 "uuid": "69c5cf38-affa-4d00-b579-5ae66d4ed78d", 00:10:34.234 "strip_size_kb": 0, 00:10:34.234 "state": "configuring", 00:10:34.234 "raid_level": "raid1", 00:10:34.234 "superblock": true, 00:10:34.234 "num_base_bdevs": 2, 00:10:34.234 "num_base_bdevs_discovered": 0, 00:10:34.234 "num_base_bdevs_operational": 2, 00:10:34.234 "base_bdevs_list": [ 00:10:34.234 { 00:10:34.234 "name": "BaseBdev1", 00:10:34.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.234 "is_configured": false, 00:10:34.234 "data_offset": 0, 00:10:34.234 "data_size": 0 00:10:34.234 }, 00:10:34.234 { 00:10:34.234 "name": "BaseBdev2", 00:10:34.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.234 "is_configured": false, 00:10:34.234 "data_offset": 0, 00:10:34.234 "data_size": 0 00:10:34.234 } 00:10:34.234 ] 00:10:34.234 }' 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.234 16:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.493 [2024-12-06 16:26:16.277667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:10:34.493 [2024-12-06 16:26:16.277779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.493 [2024-12-06 16:26:16.289651] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.493 [2024-12-06 16:26:16.289702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.493 [2024-12-06 16:26:16.289711] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.493 [2024-12-06 16:26:16.289737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.493 [2024-12-06 16:26:16.311030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.493 BaseBdev1 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:34.493 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.494 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:34.494 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.494 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.494 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.494 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.494 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.494 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.494 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:34.494 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.494 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.753 [ 00:10:34.753 { 00:10:34.753 "name": "BaseBdev1", 00:10:34.753 "aliases": [ 00:10:34.753 "5b5bee17-db61-41ef-8001-c000828edef9" 00:10:34.753 ], 00:10:34.753 "product_name": "Malloc disk", 00:10:34.753 "block_size": 512, 00:10:34.753 "num_blocks": 65536, 00:10:34.753 "uuid": "5b5bee17-db61-41ef-8001-c000828edef9", 00:10:34.753 "assigned_rate_limits": { 00:10:34.753 "rw_ios_per_sec": 0, 00:10:34.753 "rw_mbytes_per_sec": 0, 00:10:34.753 "r_mbytes_per_sec": 0, 00:10:34.753 "w_mbytes_per_sec": 0 00:10:34.753 }, 00:10:34.753 "claimed": true, 
00:10:34.753 "claim_type": "exclusive_write", 00:10:34.753 "zoned": false, 00:10:34.753 "supported_io_types": { 00:10:34.753 "read": true, 00:10:34.753 "write": true, 00:10:34.753 "unmap": true, 00:10:34.753 "flush": true, 00:10:34.753 "reset": true, 00:10:34.753 "nvme_admin": false, 00:10:34.753 "nvme_io": false, 00:10:34.753 "nvme_io_md": false, 00:10:34.753 "write_zeroes": true, 00:10:34.753 "zcopy": true, 00:10:34.753 "get_zone_info": false, 00:10:34.753 "zone_management": false, 00:10:34.753 "zone_append": false, 00:10:34.753 "compare": false, 00:10:34.753 "compare_and_write": false, 00:10:34.753 "abort": true, 00:10:34.753 "seek_hole": false, 00:10:34.753 "seek_data": false, 00:10:34.753 "copy": true, 00:10:34.753 "nvme_iov_md": false 00:10:34.753 }, 00:10:34.753 "memory_domains": [ 00:10:34.753 { 00:10:34.753 "dma_device_id": "system", 00:10:34.753 "dma_device_type": 1 00:10:34.753 }, 00:10:34.753 { 00:10:34.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.753 "dma_device_type": 2 00:10:34.753 } 00:10:34.753 ], 00:10:34.753 "driver_specific": {} 00:10:34.753 } 00:10:34.753 ] 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.753 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.753 "name": "Existed_Raid", 00:10:34.754 "uuid": "e00ccf2c-03bc-4427-8d6c-afe819c5ab71", 00:10:34.754 "strip_size_kb": 0, 00:10:34.754 "state": "configuring", 00:10:34.754 "raid_level": "raid1", 00:10:34.754 "superblock": true, 00:10:34.754 "num_base_bdevs": 2, 00:10:34.754 "num_base_bdevs_discovered": 1, 00:10:34.754 "num_base_bdevs_operational": 2, 00:10:34.754 "base_bdevs_list": [ 00:10:34.754 { 00:10:34.754 "name": "BaseBdev1", 00:10:34.754 "uuid": "5b5bee17-db61-41ef-8001-c000828edef9", 00:10:34.754 "is_configured": true, 00:10:34.754 "data_offset": 2048, 00:10:34.754 "data_size": 63488 00:10:34.754 }, 00:10:34.754 { 00:10:34.754 "name": "BaseBdev2", 00:10:34.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.754 "is_configured": false, 00:10:34.754 
"data_offset": 0, 00:10:34.754 "data_size": 0 00:10:34.754 } 00:10:34.754 ] 00:10:34.754 }' 00:10:34.754 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.754 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.012 [2024-12-06 16:26:16.770328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.012 [2024-12-06 16:26:16.770471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.012 [2024-12-06 16:26:16.778329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.012 [2024-12-06 16:26:16.780370] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.012 [2024-12-06 16:26:16.780467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.012 "name": "Existed_Raid", 00:10:35.012 "uuid": "37d14aa0-c125-47c4-9b7f-79a7793f6d83", 00:10:35.012 "strip_size_kb": 0, 00:10:35.012 "state": "configuring", 00:10:35.012 "raid_level": "raid1", 00:10:35.012 "superblock": true, 00:10:35.012 "num_base_bdevs": 2, 00:10:35.012 "num_base_bdevs_discovered": 1, 00:10:35.012 "num_base_bdevs_operational": 2, 00:10:35.012 "base_bdevs_list": [ 00:10:35.012 { 00:10:35.012 "name": "BaseBdev1", 00:10:35.012 "uuid": "5b5bee17-db61-41ef-8001-c000828edef9", 00:10:35.012 "is_configured": true, 00:10:35.012 "data_offset": 2048, 00:10:35.012 "data_size": 63488 00:10:35.012 }, 00:10:35.012 { 00:10:35.012 "name": "BaseBdev2", 00:10:35.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.012 "is_configured": false, 00:10:35.012 "data_offset": 0, 00:10:35.012 "data_size": 0 00:10:35.012 } 00:10:35.012 ] 00:10:35.012 }' 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.012 16:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.582 BaseBdev2 00:10:35.582 [2024-12-06 16:26:17.248778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.582 [2024-12-06 16:26:17.248983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:35.582 [2024-12-06 16:26:17.249005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:35.582 [2024-12-06 16:26:17.249296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ba0 00:10:35.582 [2024-12-06 16:26:17.249457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:35.582 [2024-12-06 16:26:17.249519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:35.582 [2024-12-06 16:26:17.249654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:35.582 [ 00:10:35.582 { 00:10:35.582 "name": "BaseBdev2", 00:10:35.582 "aliases": [ 00:10:35.582 "36019050-3675-4498-ae1f-fbd664dfc9d0" 00:10:35.582 ], 00:10:35.582 "product_name": "Malloc disk", 00:10:35.582 "block_size": 512, 00:10:35.582 "num_blocks": 65536, 00:10:35.582 "uuid": "36019050-3675-4498-ae1f-fbd664dfc9d0", 00:10:35.582 "assigned_rate_limits": { 00:10:35.582 "rw_ios_per_sec": 0, 00:10:35.582 "rw_mbytes_per_sec": 0, 00:10:35.582 "r_mbytes_per_sec": 0, 00:10:35.582 "w_mbytes_per_sec": 0 00:10:35.582 }, 00:10:35.582 "claimed": true, 00:10:35.582 "claim_type": "exclusive_write", 00:10:35.582 "zoned": false, 00:10:35.582 "supported_io_types": { 00:10:35.582 "read": true, 00:10:35.582 "write": true, 00:10:35.582 "unmap": true, 00:10:35.582 "flush": true, 00:10:35.582 "reset": true, 00:10:35.582 "nvme_admin": false, 00:10:35.582 "nvme_io": false, 00:10:35.582 "nvme_io_md": false, 00:10:35.582 "write_zeroes": true, 00:10:35.582 "zcopy": true, 00:10:35.582 "get_zone_info": false, 00:10:35.582 "zone_management": false, 00:10:35.582 "zone_append": false, 00:10:35.582 "compare": false, 00:10:35.582 "compare_and_write": false, 00:10:35.582 "abort": true, 00:10:35.582 "seek_hole": false, 00:10:35.582 "seek_data": false, 00:10:35.582 "copy": true, 00:10:35.582 "nvme_iov_md": false 00:10:35.582 }, 00:10:35.582 "memory_domains": [ 00:10:35.582 { 00:10:35.582 "dma_device_id": "system", 00:10:35.582 "dma_device_type": 1 00:10:35.582 }, 00:10:35.582 { 00:10:35.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.582 "dma_device_type": 2 00:10:35.582 } 00:10:35.582 ], 00:10:35.582 "driver_specific": {} 00:10:35.582 } 00:10:35.582 ] 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:35.582 "name": "Existed_Raid", 00:10:35.582 "uuid": "37d14aa0-c125-47c4-9b7f-79a7793f6d83", 00:10:35.582 "strip_size_kb": 0, 00:10:35.582 "state": "online", 00:10:35.582 "raid_level": "raid1", 00:10:35.582 "superblock": true, 00:10:35.582 "num_base_bdevs": 2, 00:10:35.582 "num_base_bdevs_discovered": 2, 00:10:35.582 "num_base_bdevs_operational": 2, 00:10:35.582 "base_bdevs_list": [ 00:10:35.582 { 00:10:35.582 "name": "BaseBdev1", 00:10:35.582 "uuid": "5b5bee17-db61-41ef-8001-c000828edef9", 00:10:35.582 "is_configured": true, 00:10:35.582 "data_offset": 2048, 00:10:35.582 "data_size": 63488 00:10:35.582 }, 00:10:35.582 { 00:10:35.582 "name": "BaseBdev2", 00:10:35.582 "uuid": "36019050-3675-4498-ae1f-fbd664dfc9d0", 00:10:35.582 "is_configured": true, 00:10:35.582 "data_offset": 2048, 00:10:35.582 "data_size": 63488 00:10:35.582 } 00:10:35.582 ] 00:10:35.582 }' 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.582 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.158 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.158 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:36.158 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.158 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.158 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.158 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.158 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:36.158 16:26:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.158 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.158 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.158 [2024-12-06 16:26:17.748461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.158 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.158 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.158 "name": "Existed_Raid", 00:10:36.158 "aliases": [ 00:10:36.158 "37d14aa0-c125-47c4-9b7f-79a7793f6d83" 00:10:36.158 ], 00:10:36.158 "product_name": "Raid Volume", 00:10:36.158 "block_size": 512, 00:10:36.158 "num_blocks": 63488, 00:10:36.158 "uuid": "37d14aa0-c125-47c4-9b7f-79a7793f6d83", 00:10:36.158 "assigned_rate_limits": { 00:10:36.158 "rw_ios_per_sec": 0, 00:10:36.158 "rw_mbytes_per_sec": 0, 00:10:36.158 "r_mbytes_per_sec": 0, 00:10:36.158 "w_mbytes_per_sec": 0 00:10:36.158 }, 00:10:36.158 "claimed": false, 00:10:36.158 "zoned": false, 00:10:36.158 "supported_io_types": { 00:10:36.158 "read": true, 00:10:36.158 "write": true, 00:10:36.158 "unmap": false, 00:10:36.158 "flush": false, 00:10:36.158 "reset": true, 00:10:36.158 "nvme_admin": false, 00:10:36.158 "nvme_io": false, 00:10:36.158 "nvme_io_md": false, 00:10:36.158 "write_zeroes": true, 00:10:36.158 "zcopy": false, 00:10:36.159 "get_zone_info": false, 00:10:36.159 "zone_management": false, 00:10:36.159 "zone_append": false, 00:10:36.159 "compare": false, 00:10:36.159 "compare_and_write": false, 00:10:36.159 "abort": false, 00:10:36.159 "seek_hole": false, 00:10:36.159 "seek_data": false, 00:10:36.159 "copy": false, 00:10:36.159 "nvme_iov_md": false 00:10:36.159 }, 00:10:36.159 "memory_domains": [ 00:10:36.159 { 00:10:36.159 "dma_device_id": "system", 00:10:36.159 
"dma_device_type": 1 00:10:36.159 }, 00:10:36.159 { 00:10:36.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.159 "dma_device_type": 2 00:10:36.159 }, 00:10:36.159 { 00:10:36.159 "dma_device_id": "system", 00:10:36.159 "dma_device_type": 1 00:10:36.159 }, 00:10:36.159 { 00:10:36.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.159 "dma_device_type": 2 00:10:36.159 } 00:10:36.159 ], 00:10:36.159 "driver_specific": { 00:10:36.159 "raid": { 00:10:36.159 "uuid": "37d14aa0-c125-47c4-9b7f-79a7793f6d83", 00:10:36.159 "strip_size_kb": 0, 00:10:36.159 "state": "online", 00:10:36.159 "raid_level": "raid1", 00:10:36.159 "superblock": true, 00:10:36.159 "num_base_bdevs": 2, 00:10:36.159 "num_base_bdevs_discovered": 2, 00:10:36.159 "num_base_bdevs_operational": 2, 00:10:36.159 "base_bdevs_list": [ 00:10:36.159 { 00:10:36.159 "name": "BaseBdev1", 00:10:36.159 "uuid": "5b5bee17-db61-41ef-8001-c000828edef9", 00:10:36.159 "is_configured": true, 00:10:36.159 "data_offset": 2048, 00:10:36.159 "data_size": 63488 00:10:36.159 }, 00:10:36.159 { 00:10:36.159 "name": "BaseBdev2", 00:10:36.159 "uuid": "36019050-3675-4498-ae1f-fbd664dfc9d0", 00:10:36.159 "is_configured": true, 00:10:36.159 "data_offset": 2048, 00:10:36.159 "data_size": 63488 00:10:36.159 } 00:10:36.159 ] 00:10:36.159 } 00:10:36.159 } 00:10:36.159 }' 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:36.159 BaseBdev2' 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:36.159 16:26:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.159 [2024-12-06 16:26:17.975890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:36.159 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.418 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.418 16:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.418 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.418 16:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.418 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.418 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.418 "name": "Existed_Raid", 00:10:36.418 "uuid": "37d14aa0-c125-47c4-9b7f-79a7793f6d83", 00:10:36.418 "strip_size_kb": 0, 00:10:36.418 "state": "online", 00:10:36.418 "raid_level": "raid1", 00:10:36.418 "superblock": true, 00:10:36.418 "num_base_bdevs": 2, 00:10:36.418 "num_base_bdevs_discovered": 1, 00:10:36.418 "num_base_bdevs_operational": 1, 00:10:36.418 "base_bdevs_list": [ 00:10:36.418 { 00:10:36.418 "name": null, 00:10:36.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.418 "is_configured": false, 00:10:36.418 "data_offset": 0, 00:10:36.418 "data_size": 63488 00:10:36.418 }, 00:10:36.418 { 00:10:36.418 "name": "BaseBdev2", 00:10:36.418 "uuid": "36019050-3675-4498-ae1f-fbd664dfc9d0", 00:10:36.418 "is_configured": true, 00:10:36.418 "data_offset": 2048, 00:10:36.418 "data_size": 63488 00:10:36.418 } 00:10:36.418 ] 00:10:36.418 }' 00:10:36.418 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.418 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.678 [2024-12-06 16:26:18.482814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.678 [2024-12-06 16:26:18.482939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.678 [2024-12-06 16:26:18.494821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.678 [2024-12-06 16:26:18.494881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.678 [2024-12-06 16:26:18.494901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:36.678 16:26:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.679 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.679 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.679 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.679 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:36.679 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.679 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.679 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74589 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74589 ']' 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74589 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74589 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.939 16:26:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74589' 00:10:36.939 killing process with pid 74589 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74589 00:10:36.939 [2024-12-06 16:26:18.595672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.939 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74589 00:10:36.939 [2024-12-06 16:26:18.596799] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.199 16:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:37.199 00:10:37.199 real 0m4.065s 00:10:37.199 user 0m6.470s 00:10:37.199 sys 0m0.821s 00:10:37.199 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.199 ************************************ 00:10:37.199 END TEST raid_state_function_test_sb 00:10:37.199 ************************************ 00:10:37.199 16:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.199 16:26:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:37.199 16:26:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:37.199 16:26:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.199 16:26:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.199 ************************************ 00:10:37.199 START TEST raid_superblock_test 00:10:37.199 ************************************ 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:37.199 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:37.200 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:37.200 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:37.200 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:37.200 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74830 00:10:37.200 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:37.200 16:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74830 00:10:37.200 16:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74830 ']' 00:10:37.200 16:26:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.200 16:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.200 16:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.200 16:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.200 16:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.200 [2024-12-06 16:26:18.982696] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:10:37.200 [2024-12-06 16:26:18.982928] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74830 ] 00:10:37.459 [2024-12-06 16:26:19.157299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.459 [2024-12-06 16:26:19.184099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.459 [2024-12-06 16:26:19.228210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.459 [2024-12-06 16:26:19.228345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.028 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.028 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:38.028 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:38.028 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:38.028 16:26:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:38.028 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:38.028 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:38.028 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:38.028 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:38.028 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:38.028 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:38.028 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.028 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.029 malloc1 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.029 [2024-12-06 16:26:19.849106] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:38.029 [2024-12-06 16:26:19.849304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.029 [2024-12-06 16:26:19.849350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:38.029 [2024-12-06 16:26:19.849389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.029 
[2024-12-06 16:26:19.851577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.029 [2024-12-06 16:26:19.851655] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:38.029 pt1 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.029 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.288 malloc2 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.288 16:26:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.288 [2024-12-06 16:26:19.881991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.288 [2024-12-06 16:26:19.882054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.288 [2024-12-06 16:26:19.882070] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:38.288 [2024-12-06 16:26:19.882079] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.288 [2024-12-06 16:26:19.884300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.288 [2024-12-06 16:26:19.884340] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.288 pt2 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.288 [2024-12-06 16:26:19.894000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:38.288 [2024-12-06 16:26:19.895834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.288 [2024-12-06 16:26:19.896000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:38.288 [2024-12-06 16:26:19.896027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:38.288 [2024-12-06 
16:26:19.896305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:38.288 [2024-12-06 16:26:19.896446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:38.288 [2024-12-06 16:26:19.896457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:38.288 [2024-12-06 16:26:19.896583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.288 16:26:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.288 "name": "raid_bdev1", 00:10:38.288 "uuid": "6187746e-c131-4473-b74a-59ec1f0d267a", 00:10:38.288 "strip_size_kb": 0, 00:10:38.288 "state": "online", 00:10:38.288 "raid_level": "raid1", 00:10:38.288 "superblock": true, 00:10:38.288 "num_base_bdevs": 2, 00:10:38.288 "num_base_bdevs_discovered": 2, 00:10:38.288 "num_base_bdevs_operational": 2, 00:10:38.288 "base_bdevs_list": [ 00:10:38.288 { 00:10:38.288 "name": "pt1", 00:10:38.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.288 "is_configured": true, 00:10:38.288 "data_offset": 2048, 00:10:38.288 "data_size": 63488 00:10:38.288 }, 00:10:38.288 { 00:10:38.288 "name": "pt2", 00:10:38.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.288 "is_configured": true, 00:10:38.288 "data_offset": 2048, 00:10:38.288 "data_size": 63488 00:10:38.288 } 00:10:38.288 ] 00:10:38.288 }' 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.288 16:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.547 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:38.547 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:38.547 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:38.547 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:38.547 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.547 
16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.547 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:38.547 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.547 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.547 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.547 [2024-12-06 16:26:20.337584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.547 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.547 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.547 "name": "raid_bdev1", 00:10:38.547 "aliases": [ 00:10:38.547 "6187746e-c131-4473-b74a-59ec1f0d267a" 00:10:38.547 ], 00:10:38.547 "product_name": "Raid Volume", 00:10:38.547 "block_size": 512, 00:10:38.547 "num_blocks": 63488, 00:10:38.547 "uuid": "6187746e-c131-4473-b74a-59ec1f0d267a", 00:10:38.547 "assigned_rate_limits": { 00:10:38.547 "rw_ios_per_sec": 0, 00:10:38.547 "rw_mbytes_per_sec": 0, 00:10:38.547 "r_mbytes_per_sec": 0, 00:10:38.547 "w_mbytes_per_sec": 0 00:10:38.547 }, 00:10:38.547 "claimed": false, 00:10:38.547 "zoned": false, 00:10:38.547 "supported_io_types": { 00:10:38.547 "read": true, 00:10:38.547 "write": true, 00:10:38.547 "unmap": false, 00:10:38.547 "flush": false, 00:10:38.547 "reset": true, 00:10:38.547 "nvme_admin": false, 00:10:38.547 "nvme_io": false, 00:10:38.547 "nvme_io_md": false, 00:10:38.547 "write_zeroes": true, 00:10:38.547 "zcopy": false, 00:10:38.547 "get_zone_info": false, 00:10:38.547 "zone_management": false, 00:10:38.547 "zone_append": false, 00:10:38.547 "compare": false, 00:10:38.547 "compare_and_write": false, 00:10:38.547 "abort": false, 00:10:38.547 "seek_hole": false, 
00:10:38.547 "seek_data": false, 00:10:38.547 "copy": false, 00:10:38.547 "nvme_iov_md": false 00:10:38.547 }, 00:10:38.547 "memory_domains": [ 00:10:38.547 { 00:10:38.547 "dma_device_id": "system", 00:10:38.547 "dma_device_type": 1 00:10:38.547 }, 00:10:38.547 { 00:10:38.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.547 "dma_device_type": 2 00:10:38.547 }, 00:10:38.547 { 00:10:38.547 "dma_device_id": "system", 00:10:38.547 "dma_device_type": 1 00:10:38.547 }, 00:10:38.547 { 00:10:38.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.547 "dma_device_type": 2 00:10:38.547 } 00:10:38.547 ], 00:10:38.547 "driver_specific": { 00:10:38.547 "raid": { 00:10:38.547 "uuid": "6187746e-c131-4473-b74a-59ec1f0d267a", 00:10:38.547 "strip_size_kb": 0, 00:10:38.547 "state": "online", 00:10:38.547 "raid_level": "raid1", 00:10:38.547 "superblock": true, 00:10:38.547 "num_base_bdevs": 2, 00:10:38.547 "num_base_bdevs_discovered": 2, 00:10:38.547 "num_base_bdevs_operational": 2, 00:10:38.547 "base_bdevs_list": [ 00:10:38.547 { 00:10:38.547 "name": "pt1", 00:10:38.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.547 "is_configured": true, 00:10:38.547 "data_offset": 2048, 00:10:38.547 "data_size": 63488 00:10:38.547 }, 00:10:38.547 { 00:10:38.547 "name": "pt2", 00:10:38.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.547 "is_configured": true, 00:10:38.547 "data_offset": 2048, 00:10:38.547 "data_size": 63488 00:10:38.547 } 00:10:38.547 ] 00:10:38.547 } 00:10:38.547 } 00:10:38.547 }' 00:10:38.547 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:38.807 pt2' 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.807 16:26:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:38.807 [2024-12-06 16:26:20.569080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6187746e-c131-4473-b74a-59ec1f0d267a 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6187746e-c131-4473-b74a-59ec1f0d267a ']' 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.807 [2024-12-06 16:26:20.612776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.807 [2024-12-06 16:26:20.612812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.807 [2024-12-06 16:26:20.612893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.807 [2024-12-06 16:26:20.612964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.807 [2024-12-06 16:26:20.612976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.807 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.067 [2024-12-06 16:26:20.740569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:39.067 [2024-12-06 16:26:20.742440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:39.067 [2024-12-06 16:26:20.742549] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:10:39.067 [2024-12-06 16:26:20.742595] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:39.067 [2024-12-06 16:26:20.742611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:39.067 [2024-12-06 16:26:20.742621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:10:39.067 request: 00:10:39.067 { 00:10:39.067 "name": "raid_bdev1", 00:10:39.067 "raid_level": "raid1", 00:10:39.067 "base_bdevs": [ 00:10:39.067 "malloc1", 00:10:39.067 "malloc2" 00:10:39.067 ], 00:10:39.067 "superblock": false, 00:10:39.067 "method": "bdev_raid_create", 00:10:39.067 "req_id": 1 00:10:39.067 } 00:10:39.067 Got JSON-RPC error response 00:10:39.067 response: 00:10:39.067 { 00:10:39.067 "code": -17, 00:10:39.067 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:39.067 } 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.067 [2024-12-06 16:26:20.804403] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:39.067 [2024-12-06 16:26:20.804518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.067 [2024-12-06 16:26:20.804555] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:39.067 [2024-12-06 16:26:20.804611] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.067 [2024-12-06 16:26:20.806826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.067 [2024-12-06 16:26:20.806897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:39.067 [2024-12-06 16:26:20.807007] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:39.067 [2024-12-06 16:26:20.807068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:39.067 pt1 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.067 16:26:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.067 "name": "raid_bdev1", 00:10:39.067 "uuid": "6187746e-c131-4473-b74a-59ec1f0d267a", 00:10:39.067 "strip_size_kb": 0, 00:10:39.067 "state": "configuring", 00:10:39.067 "raid_level": "raid1", 00:10:39.067 "superblock": true, 00:10:39.067 "num_base_bdevs": 2, 00:10:39.067 "num_base_bdevs_discovered": 1, 00:10:39.067 "num_base_bdevs_operational": 2, 00:10:39.067 "base_bdevs_list": [ 00:10:39.067 { 00:10:39.067 "name": "pt1", 00:10:39.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.067 
"is_configured": true, 00:10:39.067 "data_offset": 2048, 00:10:39.067 "data_size": 63488 00:10:39.067 }, 00:10:39.067 { 00:10:39.067 "name": null, 00:10:39.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.067 "is_configured": false, 00:10:39.067 "data_offset": 2048, 00:10:39.067 "data_size": 63488 00:10:39.067 } 00:10:39.067 ] 00:10:39.067 }' 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.067 16:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.634 [2024-12-06 16:26:21.172030] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:39.634 [2024-12-06 16:26:21.172115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.634 [2024-12-06 16:26:21.172142] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:39.634 [2024-12-06 16:26:21.172153] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.634 [2024-12-06 16:26:21.172600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.634 [2024-12-06 16:26:21.172633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:39.634 [2024-12-06 16:26:21.172715] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:39.634 [2024-12-06 16:26:21.172740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.634 [2024-12-06 16:26:21.172844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:39.634 [2024-12-06 16:26:21.172860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:39.634 [2024-12-06 16:26:21.173125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:39.634 [2024-12-06 16:26:21.173274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:39.634 [2024-12-06 16:26:21.173290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:39.634 [2024-12-06 16:26:21.173405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.634 pt2 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.634 
16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.634 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.634 "name": "raid_bdev1", 00:10:39.634 "uuid": "6187746e-c131-4473-b74a-59ec1f0d267a", 00:10:39.634 "strip_size_kb": 0, 00:10:39.635 "state": "online", 00:10:39.635 "raid_level": "raid1", 00:10:39.635 "superblock": true, 00:10:39.635 "num_base_bdevs": 2, 00:10:39.635 "num_base_bdevs_discovered": 2, 00:10:39.635 "num_base_bdevs_operational": 2, 00:10:39.635 "base_bdevs_list": [ 00:10:39.635 { 00:10:39.635 "name": "pt1", 00:10:39.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.635 "is_configured": true, 00:10:39.635 "data_offset": 2048, 00:10:39.635 "data_size": 63488 00:10:39.635 }, 00:10:39.635 { 00:10:39.635 "name": "pt2", 00:10:39.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.635 "is_configured": true, 00:10:39.635 "data_offset": 2048, 00:10:39.635 "data_size": 63488 00:10:39.635 } 00:10:39.635 ] 00:10:39.635 }' 00:10:39.635 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:10:39.635 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.893 [2024-12-06 16:26:21.547630] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.893 "name": "raid_bdev1", 00:10:39.893 "aliases": [ 00:10:39.893 "6187746e-c131-4473-b74a-59ec1f0d267a" 00:10:39.893 ], 00:10:39.893 "product_name": "Raid Volume", 00:10:39.893 "block_size": 512, 00:10:39.893 "num_blocks": 63488, 00:10:39.893 "uuid": "6187746e-c131-4473-b74a-59ec1f0d267a", 00:10:39.893 "assigned_rate_limits": { 00:10:39.893 "rw_ios_per_sec": 0, 00:10:39.893 "rw_mbytes_per_sec": 0, 00:10:39.893 "r_mbytes_per_sec": 0, 00:10:39.893 "w_mbytes_per_sec": 0 
00:10:39.893 }, 00:10:39.893 "claimed": false, 00:10:39.893 "zoned": false, 00:10:39.893 "supported_io_types": { 00:10:39.893 "read": true, 00:10:39.893 "write": true, 00:10:39.893 "unmap": false, 00:10:39.893 "flush": false, 00:10:39.893 "reset": true, 00:10:39.893 "nvme_admin": false, 00:10:39.893 "nvme_io": false, 00:10:39.893 "nvme_io_md": false, 00:10:39.893 "write_zeroes": true, 00:10:39.893 "zcopy": false, 00:10:39.893 "get_zone_info": false, 00:10:39.893 "zone_management": false, 00:10:39.893 "zone_append": false, 00:10:39.893 "compare": false, 00:10:39.893 "compare_and_write": false, 00:10:39.893 "abort": false, 00:10:39.893 "seek_hole": false, 00:10:39.893 "seek_data": false, 00:10:39.893 "copy": false, 00:10:39.893 "nvme_iov_md": false 00:10:39.893 }, 00:10:39.893 "memory_domains": [ 00:10:39.893 { 00:10:39.893 "dma_device_id": "system", 00:10:39.893 "dma_device_type": 1 00:10:39.893 }, 00:10:39.893 { 00:10:39.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.893 "dma_device_type": 2 00:10:39.893 }, 00:10:39.893 { 00:10:39.893 "dma_device_id": "system", 00:10:39.893 "dma_device_type": 1 00:10:39.893 }, 00:10:39.893 { 00:10:39.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.893 "dma_device_type": 2 00:10:39.893 } 00:10:39.893 ], 00:10:39.893 "driver_specific": { 00:10:39.893 "raid": { 00:10:39.893 "uuid": "6187746e-c131-4473-b74a-59ec1f0d267a", 00:10:39.893 "strip_size_kb": 0, 00:10:39.893 "state": "online", 00:10:39.893 "raid_level": "raid1", 00:10:39.893 "superblock": true, 00:10:39.893 "num_base_bdevs": 2, 00:10:39.893 "num_base_bdevs_discovered": 2, 00:10:39.893 "num_base_bdevs_operational": 2, 00:10:39.893 "base_bdevs_list": [ 00:10:39.893 { 00:10:39.893 "name": "pt1", 00:10:39.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.893 "is_configured": true, 00:10:39.893 "data_offset": 2048, 00:10:39.893 "data_size": 63488 00:10:39.893 }, 00:10:39.893 { 00:10:39.893 "name": "pt2", 00:10:39.893 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:39.893 "is_configured": true, 00:10:39.893 "data_offset": 2048, 00:10:39.893 "data_size": 63488 00:10:39.893 } 00:10:39.893 ] 00:10:39.893 } 00:10:39.893 } 00:10:39.893 }' 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:39.893 pt2' 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.893 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.152 [2024-12-06 16:26:21.795141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6187746e-c131-4473-b74a-59ec1f0d267a '!=' 6187746e-c131-4473-b74a-59ec1f0d267a ']' 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.152 [2024-12-06 16:26:21.822875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:10:40.152 "name": "raid_bdev1", 00:10:40.152 "uuid": "6187746e-c131-4473-b74a-59ec1f0d267a", 00:10:40.152 "strip_size_kb": 0, 00:10:40.152 "state": "online", 00:10:40.152 "raid_level": "raid1", 00:10:40.152 "superblock": true, 00:10:40.152 "num_base_bdevs": 2, 00:10:40.152 "num_base_bdevs_discovered": 1, 00:10:40.152 "num_base_bdevs_operational": 1, 00:10:40.152 "base_bdevs_list": [ 00:10:40.152 { 00:10:40.152 "name": null, 00:10:40.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.152 "is_configured": false, 00:10:40.152 "data_offset": 0, 00:10:40.152 "data_size": 63488 00:10:40.152 }, 00:10:40.152 { 00:10:40.152 "name": "pt2", 00:10:40.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.152 "is_configured": true, 00:10:40.152 "data_offset": 2048, 00:10:40.152 "data_size": 63488 00:10:40.152 } 00:10:40.152 ] 00:10:40.152 }' 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.152 16:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.414 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:40.414 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.414 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.414 [2024-12-06 16:26:22.238142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.414 [2024-12-06 16:26:22.238180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.414 [2024-12-06 16:26:22.238271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.414 [2024-12-06 16:26:22.238336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.414 [2024-12-06 16:26:22.238354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:40.415 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.415 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.415 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.415 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:40.415 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.682 [2024-12-06 16:26:22.313996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:40.682 [2024-12-06 16:26:22.314056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.682 [2024-12-06 16:26:22.314075] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:40.682 [2024-12-06 16:26:22.314084] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.682 [2024-12-06 16:26:22.316401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.682 [2024-12-06 16:26:22.316440] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:40.682 [2024-12-06 16:26:22.316519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:40.682 [2024-12-06 16:26:22.316564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:40.682 [2024-12-06 16:26:22.316654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:40.682 [2024-12-06 16:26:22.316662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:40.682 [2024-12-06 16:26:22.316877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:40.682 [2024-12-06 16:26:22.317007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:40.682 [2024-12-06 16:26:22.317020] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006d00 00:10:40.682 [2024-12-06 16:26:22.317127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.682 pt2 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.682 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:10:40.683 "name": "raid_bdev1", 00:10:40.683 "uuid": "6187746e-c131-4473-b74a-59ec1f0d267a", 00:10:40.683 "strip_size_kb": 0, 00:10:40.683 "state": "online", 00:10:40.683 "raid_level": "raid1", 00:10:40.683 "superblock": true, 00:10:40.683 "num_base_bdevs": 2, 00:10:40.683 "num_base_bdevs_discovered": 1, 00:10:40.683 "num_base_bdevs_operational": 1, 00:10:40.683 "base_bdevs_list": [ 00:10:40.683 { 00:10:40.683 "name": null, 00:10:40.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.683 "is_configured": false, 00:10:40.683 "data_offset": 2048, 00:10:40.683 "data_size": 63488 00:10:40.683 }, 00:10:40.683 { 00:10:40.683 "name": "pt2", 00:10:40.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.683 "is_configured": true, 00:10:40.683 "data_offset": 2048, 00:10:40.683 "data_size": 63488 00:10:40.683 } 00:10:40.683 ] 00:10:40.683 }' 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.683 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.941 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:40.941 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.941 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.200 [2024-12-06 16:26:22.781281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.200 [2024-12-06 16:26:22.781388] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.200 [2024-12-06 16:26:22.781500] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.200 [2024-12-06 16:26:22.781575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.200 [2024-12-06 16:26:22.781646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.200 [2024-12-06 16:26:22.845117] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:41.200 [2024-12-06 16:26:22.845200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.200 [2024-12-06 16:26:22.845227] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:41.200 [2024-12-06 16:26:22.845241] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.200 [2024-12-06 16:26:22.847466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.200 [2024-12-06 16:26:22.847588] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:41.200 [2024-12-06 16:26:22.847681] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:41.200 [2024-12-06 16:26:22.847732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:41.200 [2024-12-06 16:26:22.847855] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:41.200 [2024-12-06 16:26:22.847872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.200 [2024-12-06 16:26:22.847890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:10:41.200 [2024-12-06 16:26:22.847943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:41.200 [2024-12-06 16:26:22.848075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:41.200 [2024-12-06 16:26:22.848095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:41.200 [2024-12-06 16:26:22.848378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:41.200 [2024-12-06 16:26:22.848514] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:41.200 [2024-12-06 16:26:22.848530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:41.200 [2024-12-06 16:26:22.848655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.200 pt1 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.200 "name": "raid_bdev1", 00:10:41.200 "uuid": "6187746e-c131-4473-b74a-59ec1f0d267a", 00:10:41.200 "strip_size_kb": 0, 00:10:41.200 "state": "online", 00:10:41.200 "raid_level": "raid1", 00:10:41.200 "superblock": true, 00:10:41.200 "num_base_bdevs": 2, 00:10:41.200 "num_base_bdevs_discovered": 1, 00:10:41.200 "num_base_bdevs_operational": 
1, 00:10:41.200 "base_bdevs_list": [ 00:10:41.200 { 00:10:41.200 "name": null, 00:10:41.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.200 "is_configured": false, 00:10:41.200 "data_offset": 2048, 00:10:41.200 "data_size": 63488 00:10:41.200 }, 00:10:41.200 { 00:10:41.200 "name": "pt2", 00:10:41.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.200 "is_configured": true, 00:10:41.200 "data_offset": 2048, 00:10:41.200 "data_size": 63488 00:10:41.200 } 00:10:41.200 ] 00:10:41.200 }' 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.200 16:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:41.766 [2024-12-06 16:26:23.368522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6187746e-c131-4473-b74a-59ec1f0d267a '!=' 6187746e-c131-4473-b74a-59ec1f0d267a ']' 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74830 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74830 ']' 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74830 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74830 00:10:41.766 killing process with pid 74830 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74830' 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74830 00:10:41.766 [2024-12-06 16:26:23.456688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.766 [2024-12-06 16:26:23.456797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.766 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74830 00:10:41.766 [2024-12-06 16:26:23.456848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.766 [2024-12-06 16:26:23.456858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state 
offline 00:10:41.766 [2024-12-06 16:26:23.480481] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.025 16:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:42.025 00:10:42.025 real 0m4.812s 00:10:42.025 user 0m7.815s 00:10:42.025 sys 0m1.036s 00:10:42.025 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.025 ************************************ 00:10:42.025 END TEST raid_superblock_test 00:10:42.025 ************************************ 00:10:42.025 16:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.025 16:26:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:10:42.025 16:26:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:42.025 16:26:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.025 16:26:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.025 ************************************ 00:10:42.025 START TEST raid_read_error_test 00:10:42.025 ************************************ 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wHyNM8VfgC 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75144 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75144 00:10:42.025 
16:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75144 ']' 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.025 16:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.284 [2024-12-06 16:26:23.886728] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:10:42.284 [2024-12-06 16:26:23.886869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75144 ] 00:10:42.284 [2024-12-06 16:26:24.061515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.284 [2024-12-06 16:26:24.091700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.544 [2024-12-06 16:26:24.136495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.544 [2024-12-06 16:26:24.136536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.112 BaseBdev1_malloc 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.112 true 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.112 [2024-12-06 16:26:24.773776] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:43.112 [2024-12-06 16:26:24.773840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.112 [2024-12-06 16:26:24.773864] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:43.112 [2024-12-06 16:26:24.773874] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.112 [2024-12-06 16:26:24.776126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.112 [2024-12-06 16:26:24.776166] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:10:43.112 BaseBdev1 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.112 BaseBdev2_malloc 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.112 true 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.112 [2024-12-06 16:26:24.814976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:43.112 [2024-12-06 16:26:24.815054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.112 [2024-12-06 16:26:24.815074] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:43.112 [2024-12-06 16:26:24.815083] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.112 [2024-12-06 16:26:24.817345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.112 [2024-12-06 16:26:24.817386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:43.112 BaseBdev2 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.112 [2024-12-06 16:26:24.827063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.112 [2024-12-06 16:26:24.829076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.112 [2024-12-06 16:26:24.829318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:43.112 [2024-12-06 16:26:24.829341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:43.112 [2024-12-06 16:26:24.829639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:43.112 [2024-12-06 16:26:24.829813] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:43.112 [2024-12-06 16:26:24.829830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:43.112 [2024-12-06 16:26:24.829979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.112 "name": "raid_bdev1", 00:10:43.112 "uuid": "3e927687-5402-406a-b36b-1d12e1a4fd63", 00:10:43.112 "strip_size_kb": 0, 00:10:43.112 "state": "online", 00:10:43.112 "raid_level": "raid1", 00:10:43.112 "superblock": true, 00:10:43.112 "num_base_bdevs": 2, 00:10:43.112 
"num_base_bdevs_discovered": 2, 00:10:43.112 "num_base_bdevs_operational": 2, 00:10:43.112 "base_bdevs_list": [ 00:10:43.112 { 00:10:43.112 "name": "BaseBdev1", 00:10:43.112 "uuid": "8baa46cd-639b-54d9-b28b-ba9a9c1f0ad8", 00:10:43.112 "is_configured": true, 00:10:43.112 "data_offset": 2048, 00:10:43.112 "data_size": 63488 00:10:43.112 }, 00:10:43.112 { 00:10:43.112 "name": "BaseBdev2", 00:10:43.112 "uuid": "b8bf076c-3d85-568a-a610-6a4002324020", 00:10:43.112 "is_configured": true, 00:10:43.112 "data_offset": 2048, 00:10:43.112 "data_size": 63488 00:10:43.112 } 00:10:43.112 ] 00:10:43.112 }' 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.112 16:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.678 16:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:43.678 16:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:43.678 [2024-12-06 16:26:25.374433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:44.612 16:26:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.612 "name": "raid_bdev1", 00:10:44.612 "uuid": "3e927687-5402-406a-b36b-1d12e1a4fd63", 00:10:44.612 "strip_size_kb": 0, 00:10:44.612 "state": "online", 
00:10:44.612 "raid_level": "raid1",
00:10:44.612 "superblock": true,
00:10:44.612 "num_base_bdevs": 2,
00:10:44.612 "num_base_bdevs_discovered": 2,
00:10:44.612 "num_base_bdevs_operational": 2,
00:10:44.612 "base_bdevs_list": [
00:10:44.612 {
00:10:44.612 "name": "BaseBdev1",
00:10:44.612 "uuid": "8baa46cd-639b-54d9-b28b-ba9a9c1f0ad8",
00:10:44.612 "is_configured": true,
00:10:44.612 "data_offset": 2048,
00:10:44.612 "data_size": 63488
00:10:44.612 },
00:10:44.612 {
00:10:44.612 "name": "BaseBdev2",
00:10:44.612 "uuid": "b8bf076c-3d85-568a-a610-6a4002324020",
00:10:44.612 "is_configured": true,
00:10:44.612 "data_offset": 2048,
00:10:44.612 "data_size": 63488
00:10:44.612 }
00:10:44.612 ]
00:10:44.612 }'
00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:44.612 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.183 [2024-12-06 16:26:26.766374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:45.183 [2024-12-06 16:26:26.766410] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:45.183 [2024-12-06 16:26:26.769336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:45.183 [2024-12-06 16:26:26.769400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:45.183 [2024-12-06 16:26:26.769498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:45.183 [2024-12-06 16:26:26.769519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:45.183 {
00:10:45.183 "results": [
00:10:45.183 {
00:10:45.183 "job": "raid_bdev1",
00:10:45.183 "core_mask": "0x1",
00:10:45.183 "workload": "randrw",
00:10:45.183 "percentage": 50,
00:10:45.183 "status": "finished",
00:10:45.183 "queue_depth": 1,
00:10:45.183 "io_size": 131072,
00:10:45.183 "runtime": 1.392824,
00:10:45.183 "iops": 18057.557882402947,
00:10:45.183 "mibps": 2257.1947353003684,
00:10:45.183 "io_failed": 0,
00:10:45.183 "io_timeout": 0,
00:10:45.183 "avg_latency_us": 52.62637981005209,
00:10:45.183 "min_latency_us": 24.370305676855896,
00:10:45.183 "max_latency_us": 1430.9170305676855
00:10:45.183 }
00:10:45.183 ],
00:10:45.183 "core_count": 1
00:10:45.183 }
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75144
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75144 ']'
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75144
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75144
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 75144
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75144'
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75144
00:10:45.183 [2024-12-06 16:26:26.815739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:45.183 16:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75144
00:10:45.183 [2024-12-06 16:26:26.832048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:45.442 16:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:10:45.442 16:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wHyNM8VfgC
00:10:45.442 16:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:10:45.442 16:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:10:45.442 16:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:10:45.442 16:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:45.442 16:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:10:45.442 16:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:10:45.442 
00:10:45.442 real 0m3.270s
00:10:45.442 user 0m4.196s
00:10:45.442 sys 0m0.545s
00:10:45.442 16:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:45.442 16:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.442 ************************************
00:10:45.442 END TEST raid_read_error_test
00:10:45.442 ************************************
00:10:45.442 16:26:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write
00:10:45.442 16:26:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:45.442 16:26:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:45.442 16:26:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:45.442 ************************************
00:10:45.442 START TEST raid_write_error_test
00:10:45.442 ************************************
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.y8BXnwPDGF
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75278
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75278
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75278 ']'
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:45.442 16:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.442 [2024-12-06 16:26:27.222553] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization...
00:10:45.442 [2024-12-06 16:26:27.222708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75278 ]
00:10:45.701 [2024-12-06 16:26:27.370906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:45.701 [2024-12-06 16:26:27.401139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:45.701 [2024-12-06 16:26:27.445258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:45.701 [2024-12-06 16:26:27.445297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:46.269 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:46.269 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:10:46.269 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:46.269 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:10:46.269 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.269 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.528 BaseBdev1_malloc
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.528 true
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.528 [2024-12-06 16:26:28.126605] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:10:46.528 [2024-12-06 16:26:28.126675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:46.528 [2024-12-06 16:26:28.126694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:10:46.528 [2024-12-06 16:26:28.126704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:46.528 [2024-12-06 16:26:28.129011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:46.528 [2024-12-06 16:26:28.129061] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:10:46.528 BaseBdev1
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.528 BaseBdev2_malloc
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.528 true
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.528 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.528 [2024-12-06 16:26:28.167743] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:10:46.528 [2024-12-06 16:26:28.167796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:46.528 [2024-12-06 16:26:28.167815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:10:46.528 [2024-12-06 16:26:28.167824] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:46.528 [2024-12-06 16:26:28.170045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:46.528 [2024-12-06 16:26:28.170083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:10:46.529 BaseBdev2
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.529 [2024-12-06 16:26:28.179777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:46.529 [2024-12-06 16:26:28.181833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:46.529 [2024-12-06 16:26:28.182017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:10:46.529 [2024-12-06 16:26:28.182031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:46.529 [2024-12-06 16:26:28.182291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:10:46.529 [2024-12-06 16:26:28.182451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:10:46.529 [2024-12-06 16:26:28.182472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:10:46.529 [2024-12-06 16:26:28.182617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:46.529 "name": "raid_bdev1",
00:10:46.529 "uuid": "6209c410-7498-457a-b051-50be488e208c",
00:10:46.529 "strip_size_kb": 0,
00:10:46.529 "state": "online",
00:10:46.529 "raid_level": "raid1",
00:10:46.529 "superblock": true,
00:10:46.529 "num_base_bdevs": 2,
00:10:46.529 "num_base_bdevs_discovered": 2,
00:10:46.529 "num_base_bdevs_operational": 2,
00:10:46.529 "base_bdevs_list": [
00:10:46.529 {
00:10:46.529 "name": "BaseBdev1",
00:10:46.529 "uuid": "ccab6dd5-b4ef-50a6-8878-0b0f96d03e4d",
00:10:46.529 "is_configured": true,
00:10:46.529 "data_offset": 2048,
00:10:46.529 "data_size": 63488
00:10:46.529 },
00:10:46.529 {
00:10:46.529 "name": "BaseBdev2",
00:10:46.529 "uuid": "7b950dba-98ea-5b01-ae7b-8c8607f3ebc9",
00:10:46.529 "is_configured": true,
00:10:46.529 "data_offset": 2048,
00:10:46.529 "data_size": 63488
00:10:46.529 }
00:10:46.529 ]
00:10:46.529 }'
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:46.529 16:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.110 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:10:47.110 16:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:10:47.110 [2024-12-06 16:26:28.707230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.049 [2024-12-06 16:26:29.644145] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:10:48.049 [2024-12-06 16:26:29.644228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:48.049 [2024-12-06 16:26:29.644478] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:48.049 "name": "raid_bdev1",
00:10:48.049 "uuid": "6209c410-7498-457a-b051-50be488e208c",
00:10:48.049 "strip_size_kb": 0,
00:10:48.049 "state": "online",
00:10:48.049 "raid_level": "raid1",
00:10:48.049 "superblock": true,
00:10:48.049 "num_base_bdevs": 2,
00:10:48.049 "num_base_bdevs_discovered": 1,
00:10:48.049 "num_base_bdevs_operational": 1,
00:10:48.049 "base_bdevs_list": [
00:10:48.049 {
00:10:48.049 "name": null,
00:10:48.049 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:48.049 "is_configured": false,
00:10:48.049 "data_offset": 0,
00:10:48.049 "data_size": 63488
00:10:48.049 },
00:10:48.049 {
00:10:48.049 "name": "BaseBdev2",
00:10:48.049 "uuid": "7b950dba-98ea-5b01-ae7b-8c8607f3ebc9",
00:10:48.049 "is_configured": true,
00:10:48.049 "data_offset": 2048,
00:10:48.049 "data_size": 63488
00:10:48.049 }
00:10:48.049 ]
00:10:48.049 }'
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:48.049 16:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.308 16:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:48.308 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.308 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.308 [2024-12-06 16:26:30.102104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:48.308 [2024-12-06 16:26:30.102232] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:48.308 [2024-12-06 16:26:30.104951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:48.308 [2024-12-06 16:26:30.105051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:48.308 [2024-12-06 16:26:30.105158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:48.308 [2024-12-06 16:26:30.105231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:10:48.308 {
00:10:48.308 "results": [
00:10:48.308 {
00:10:48.308 "job": "raid_bdev1",
00:10:48.308 "core_mask": "0x1",
00:10:48.308 "workload": "randrw",
00:10:48.308 "percentage": 50,
00:10:48.308 "status": "finished",
00:10:48.308 "queue_depth": 1,
00:10:48.308 "io_size": 131072,
00:10:48.308 "runtime": 1.395838,
00:10:48.308 "iops": 21404.346349648025,
00:10:48.309 "mibps": 2675.543293706003,
00:10:48.309 "io_failed": 0,
00:10:48.309 "io_timeout": 0,
00:10:48.309 "avg_latency_us": 43.98357036776548,
00:10:48.309 "min_latency_us": 22.91703056768559,
00:10:48.309 "max_latency_us": 1452.380786026201
00:10:48.309 }
00:10:48.309 ],
00:10:48.309 "core_count": 1
00:10:48.309 }
00:10:48.309 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.309 16:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75278
00:10:48.309 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75278 ']'
00:10:48.309 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75278
00:10:48.309 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:10:48.309 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:48.309 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75278
00:10:48.309 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:48.309 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:48.309 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75278'
killing process with pid 75278
00:10:48.309 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75278
00:10:48.309 [2024-12-06 16:26:30.135846] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:48.309 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75278
00:10:48.568 [2024-12-06 16:26:30.151928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:48.568 16:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.y8BXnwPDGF
00:10:48.568 16:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:10:48.568 16:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:10:48.568 16:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:10:48.568 16:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:10:48.568 16:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:48.568 16:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:10:48.568 16:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:10:48.568 
00:10:48.568 real 0m3.246s
00:10:48.568 user 0m4.146s
00:10:48.568 sys 0m0.510s
************************************
END TEST raid_write_error_test
************************************
00:10:48.568 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:48.568 16:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.827 16:26:30 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:10:48.827 16:26:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:10:48.827 16:26:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:10:48.827 16:26:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:48.827 16:26:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:48.827 16:26:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:48.827 ************************************
00:10:48.827 START TEST raid_state_function_test
00:10:48.827 ************************************
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75405
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75405'
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
Process raid pid: 75405
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75405
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 75405 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:48.827 16:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.827 [2024-12-06 16:26:30.546971] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization...
00:10:48.828 [2024-12-06 16:26:30.547237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:49.086 [2024-12-06 16:26:30.728254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:49.086 [2024-12-06 16:26:30.757926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:49.086 [2024-12-06 16:26:30.802300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:49.086 [2024-12-06 16:26:30.802341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.661 [2024-12-06 16:26:31.437924] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:49.661 [2024-12-06 16:26:31.437986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:49.661 [2024-12-06 16:26:31.438005] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:49.661 [2024-12-06 16:26:31.438017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:49.661 [2024-12-06 16:26:31.438025] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:49.661 [2024-12-06 16:26:31.438037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:49.661 "name": "Existed_Raid",
00:10:49.661 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.661 "strip_size_kb": 64,
00:10:49.661 "state": "configuring",
00:10:49.661 "raid_level": "raid0",
00:10:49.661 "superblock": false,
00:10:49.661 "num_base_bdevs": 3,
00:10:49.661 "num_base_bdevs_discovered": 0,
00:10:49.661 "num_base_bdevs_operational": 3,
00:10:49.661 "base_bdevs_list": [
00:10:49.661 {
00:10:49.661 "name": "BaseBdev1",
00:10:49.661 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.661 "is_configured": false,
00:10:49.661 "data_offset": 0,
00:10:49.661 "data_size": 0
00:10:49.661 },
00:10:49.661 {
00:10:49.661 "name": "BaseBdev2",
00:10:49.661 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.661 "is_configured": false,
00:10:49.661 "data_offset": 0,
00:10:49.661 "data_size": 0
00:10:49.661 },
00:10:49.661 {
00:10:49.661 "name": "BaseBdev3",
00:10:49.661 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.661 "is_configured": false,
00:10:49.661 "data_offset": 0,
00:10:49.661 "data_size": 0
00:10:49.661 }
00:10:49.661 ]
00:10:49.661 }'
00:10:49.661 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:49.928 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.188 [2024-12-06 16:26:31.873118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:50.188 [2024-12-06 16:26:31.873239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.188 [2024-12-06 16:26:31.885118] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:50.188 [2024-12-06 16:26:31.885267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:50.188 [2024-12-06 16:26:31.885300] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:50.188 [2024-12-06 16:26:31.885325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:50.188 [2024-12-06 16:26:31.885344] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:50.188 [2024-12-06 16:26:31.885384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.188 [2024-12-06 16:26:31.906667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.188 BaseBdev1 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.188 [ 00:10:50.188 { 00:10:50.188 "name": "BaseBdev1", 00:10:50.188 "aliases": [ 00:10:50.188 "7bf0b977-a45b-41cf-bf4d-f3cb5e2b2da1" 00:10:50.188 ], 00:10:50.188 
"product_name": "Malloc disk", 00:10:50.188 "block_size": 512, 00:10:50.188 "num_blocks": 65536, 00:10:50.188 "uuid": "7bf0b977-a45b-41cf-bf4d-f3cb5e2b2da1", 00:10:50.188 "assigned_rate_limits": { 00:10:50.188 "rw_ios_per_sec": 0, 00:10:50.188 "rw_mbytes_per_sec": 0, 00:10:50.188 "r_mbytes_per_sec": 0, 00:10:50.188 "w_mbytes_per_sec": 0 00:10:50.188 }, 00:10:50.188 "claimed": true, 00:10:50.188 "claim_type": "exclusive_write", 00:10:50.188 "zoned": false, 00:10:50.188 "supported_io_types": { 00:10:50.188 "read": true, 00:10:50.188 "write": true, 00:10:50.188 "unmap": true, 00:10:50.188 "flush": true, 00:10:50.188 "reset": true, 00:10:50.188 "nvme_admin": false, 00:10:50.188 "nvme_io": false, 00:10:50.188 "nvme_io_md": false, 00:10:50.188 "write_zeroes": true, 00:10:50.188 "zcopy": true, 00:10:50.188 "get_zone_info": false, 00:10:50.188 "zone_management": false, 00:10:50.188 "zone_append": false, 00:10:50.188 "compare": false, 00:10:50.188 "compare_and_write": false, 00:10:50.188 "abort": true, 00:10:50.188 "seek_hole": false, 00:10:50.188 "seek_data": false, 00:10:50.188 "copy": true, 00:10:50.188 "nvme_iov_md": false 00:10:50.188 }, 00:10:50.188 "memory_domains": [ 00:10:50.188 { 00:10:50.188 "dma_device_id": "system", 00:10:50.188 "dma_device_type": 1 00:10:50.188 }, 00:10:50.188 { 00:10:50.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.188 "dma_device_type": 2 00:10:50.188 } 00:10:50.188 ], 00:10:50.188 "driver_specific": {} 00:10:50.188 } 00:10:50.188 ] 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.188 16:26:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.188 16:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.188 "name": "Existed_Raid", 00:10:50.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.188 "strip_size_kb": 64, 00:10:50.188 "state": "configuring", 00:10:50.188 "raid_level": "raid0", 00:10:50.188 "superblock": false, 00:10:50.188 "num_base_bdevs": 3, 00:10:50.188 "num_base_bdevs_discovered": 1, 00:10:50.188 "num_base_bdevs_operational": 3, 00:10:50.188 "base_bdevs_list": [ 00:10:50.188 { 00:10:50.188 "name": "BaseBdev1", 
00:10:50.188 "uuid": "7bf0b977-a45b-41cf-bf4d-f3cb5e2b2da1", 00:10:50.188 "is_configured": true, 00:10:50.188 "data_offset": 0, 00:10:50.188 "data_size": 65536 00:10:50.188 }, 00:10:50.188 { 00:10:50.188 "name": "BaseBdev2", 00:10:50.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.188 "is_configured": false, 00:10:50.188 "data_offset": 0, 00:10:50.188 "data_size": 0 00:10:50.188 }, 00:10:50.188 { 00:10:50.188 "name": "BaseBdev3", 00:10:50.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.188 "is_configured": false, 00:10:50.188 "data_offset": 0, 00:10:50.188 "data_size": 0 00:10:50.188 } 00:10:50.188 ] 00:10:50.188 }' 00:10:50.188 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.188 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.758 [2024-12-06 16:26:32.397871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.758 [2024-12-06 16:26:32.397938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.758 [2024-12-06 
16:26:32.405906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.758 [2024-12-06 16:26:32.407886] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:50.758 [2024-12-06 16:26:32.407942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:50.758 [2024-12-06 16:26:32.407954] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:50.758 [2024-12-06 16:26:32.407986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.758 "name": "Existed_Raid", 00:10:50.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.758 "strip_size_kb": 64, 00:10:50.758 "state": "configuring", 00:10:50.758 "raid_level": "raid0", 00:10:50.758 "superblock": false, 00:10:50.758 "num_base_bdevs": 3, 00:10:50.758 "num_base_bdevs_discovered": 1, 00:10:50.758 "num_base_bdevs_operational": 3, 00:10:50.758 "base_bdevs_list": [ 00:10:50.758 { 00:10:50.758 "name": "BaseBdev1", 00:10:50.758 "uuid": "7bf0b977-a45b-41cf-bf4d-f3cb5e2b2da1", 00:10:50.758 "is_configured": true, 00:10:50.758 "data_offset": 0, 00:10:50.758 "data_size": 65536 00:10:50.758 }, 00:10:50.758 { 00:10:50.758 "name": "BaseBdev2", 00:10:50.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.758 "is_configured": false, 00:10:50.758 "data_offset": 0, 00:10:50.758 "data_size": 0 00:10:50.758 }, 00:10:50.758 { 00:10:50.758 "name": "BaseBdev3", 00:10:50.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.758 "is_configured": false, 00:10:50.758 "data_offset": 0, 00:10:50.758 "data_size": 0 00:10:50.758 } 00:10:50.758 ] 00:10:50.758 }' 00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:50.758 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.018 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:51.018 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.018 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.018 [2024-12-06 16:26:32.852654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.018 BaseBdev2 00:10:51.018 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.018 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:51.018 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:51.277 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.277 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:51.277 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.277 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.277 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.277 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.277 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.277 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.277 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.277 16:26:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.277 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.277 [ 00:10:51.277 { 00:10:51.277 "name": "BaseBdev2", 00:10:51.277 "aliases": [ 00:10:51.277 "cf128887-c1a1-42d2-8287-30ae87b9f4f4" 00:10:51.277 ], 00:10:51.277 "product_name": "Malloc disk", 00:10:51.277 "block_size": 512, 00:10:51.277 "num_blocks": 65536, 00:10:51.277 "uuid": "cf128887-c1a1-42d2-8287-30ae87b9f4f4", 00:10:51.277 "assigned_rate_limits": { 00:10:51.277 "rw_ios_per_sec": 0, 00:10:51.277 "rw_mbytes_per_sec": 0, 00:10:51.277 "r_mbytes_per_sec": 0, 00:10:51.277 "w_mbytes_per_sec": 0 00:10:51.277 }, 00:10:51.277 "claimed": true, 00:10:51.277 "claim_type": "exclusive_write", 00:10:51.277 "zoned": false, 00:10:51.277 "supported_io_types": { 00:10:51.277 "read": true, 00:10:51.277 "write": true, 00:10:51.277 "unmap": true, 00:10:51.277 "flush": true, 00:10:51.277 "reset": true, 00:10:51.277 "nvme_admin": false, 00:10:51.277 "nvme_io": false, 00:10:51.277 "nvme_io_md": false, 00:10:51.277 "write_zeroes": true, 00:10:51.277 "zcopy": true, 00:10:51.277 "get_zone_info": false, 00:10:51.277 "zone_management": false, 00:10:51.277 "zone_append": false, 00:10:51.277 "compare": false, 00:10:51.277 "compare_and_write": false, 00:10:51.277 "abort": true, 00:10:51.277 "seek_hole": false, 00:10:51.277 "seek_data": false, 00:10:51.277 "copy": true, 00:10:51.277 "nvme_iov_md": false 00:10:51.277 }, 00:10:51.277 "memory_domains": [ 00:10:51.277 { 00:10:51.277 "dma_device_id": "system", 00:10:51.278 "dma_device_type": 1 00:10:51.278 }, 00:10:51.278 { 00:10:51.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.278 "dma_device_type": 2 00:10:51.278 } 00:10:51.278 ], 00:10:51.278 "driver_specific": {} 00:10:51.278 } 00:10:51.278 ] 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.278 16:26:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.278 "name": "Existed_Raid", 00:10:51.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.278 "strip_size_kb": 64, 00:10:51.278 "state": "configuring", 00:10:51.278 "raid_level": "raid0", 00:10:51.278 "superblock": false, 00:10:51.278 "num_base_bdevs": 3, 00:10:51.278 "num_base_bdevs_discovered": 2, 00:10:51.278 "num_base_bdevs_operational": 3, 00:10:51.278 "base_bdevs_list": [ 00:10:51.278 { 00:10:51.278 "name": "BaseBdev1", 00:10:51.278 "uuid": "7bf0b977-a45b-41cf-bf4d-f3cb5e2b2da1", 00:10:51.278 "is_configured": true, 00:10:51.278 "data_offset": 0, 00:10:51.278 "data_size": 65536 00:10:51.278 }, 00:10:51.278 { 00:10:51.278 "name": "BaseBdev2", 00:10:51.278 "uuid": "cf128887-c1a1-42d2-8287-30ae87b9f4f4", 00:10:51.278 "is_configured": true, 00:10:51.278 "data_offset": 0, 00:10:51.278 "data_size": 65536 00:10:51.278 }, 00:10:51.278 { 00:10:51.278 "name": "BaseBdev3", 00:10:51.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.278 "is_configured": false, 00:10:51.278 "data_offset": 0, 00:10:51.278 "data_size": 0 00:10:51.278 } 00:10:51.278 ] 00:10:51.278 }' 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.278 16:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.536 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:51.536 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.536 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.795 [2024-12-06 16:26:33.391494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.795 [2024-12-06 16:26:33.391650] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:51.795 [2024-12-06 16:26:33.391689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:51.795 [2024-12-06 16:26:33.392092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:51.795 [2024-12-06 16:26:33.392332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:51.795 [2024-12-06 16:26:33.392348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:51.795 [2024-12-06 16:26:33.392622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.795 BaseBdev3 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.795 
16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.795 [ 00:10:51.795 { 00:10:51.795 "name": "BaseBdev3", 00:10:51.795 "aliases": [ 00:10:51.795 "35c29b50-4f64-438d-a788-e9c7b74d2c93" 00:10:51.795 ], 00:10:51.795 "product_name": "Malloc disk", 00:10:51.795 "block_size": 512, 00:10:51.795 "num_blocks": 65536, 00:10:51.795 "uuid": "35c29b50-4f64-438d-a788-e9c7b74d2c93", 00:10:51.795 "assigned_rate_limits": { 00:10:51.795 "rw_ios_per_sec": 0, 00:10:51.795 "rw_mbytes_per_sec": 0, 00:10:51.795 "r_mbytes_per_sec": 0, 00:10:51.795 "w_mbytes_per_sec": 0 00:10:51.795 }, 00:10:51.795 "claimed": true, 00:10:51.795 "claim_type": "exclusive_write", 00:10:51.795 "zoned": false, 00:10:51.795 "supported_io_types": { 00:10:51.795 "read": true, 00:10:51.795 "write": true, 00:10:51.795 "unmap": true, 00:10:51.795 "flush": true, 00:10:51.795 "reset": true, 00:10:51.795 "nvme_admin": false, 00:10:51.795 "nvme_io": false, 00:10:51.795 "nvme_io_md": false, 00:10:51.795 "write_zeroes": true, 00:10:51.795 "zcopy": true, 00:10:51.795 "get_zone_info": false, 00:10:51.795 "zone_management": false, 00:10:51.795 "zone_append": false, 00:10:51.795 "compare": false, 00:10:51.795 "compare_and_write": false, 00:10:51.795 "abort": true, 00:10:51.795 "seek_hole": false, 00:10:51.795 "seek_data": false, 00:10:51.795 "copy": true, 00:10:51.795 "nvme_iov_md": false 00:10:51.795 }, 00:10:51.795 "memory_domains": [ 00:10:51.795 { 00:10:51.795 "dma_device_id": "system", 00:10:51.795 "dma_device_type": 1 00:10:51.795 }, 00:10:51.795 { 00:10:51.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.795 "dma_device_type": 2 00:10:51.795 } 00:10:51.795 ], 00:10:51.795 "driver_specific": {} 00:10:51.795 } 00:10:51.795 ] 
00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.795 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.795 "name": "Existed_Raid", 00:10:51.795 "uuid": "bf98e236-3bda-49dd-a28c-314f6f58d378", 00:10:51.795 "strip_size_kb": 64, 00:10:51.795 "state": "online", 00:10:51.795 "raid_level": "raid0", 00:10:51.795 "superblock": false, 00:10:51.795 "num_base_bdevs": 3, 00:10:51.795 "num_base_bdevs_discovered": 3, 00:10:51.796 "num_base_bdevs_operational": 3, 00:10:51.796 "base_bdevs_list": [ 00:10:51.796 { 00:10:51.796 "name": "BaseBdev1", 00:10:51.796 "uuid": "7bf0b977-a45b-41cf-bf4d-f3cb5e2b2da1", 00:10:51.796 "is_configured": true, 00:10:51.796 "data_offset": 0, 00:10:51.796 "data_size": 65536 00:10:51.796 }, 00:10:51.796 { 00:10:51.796 "name": "BaseBdev2", 00:10:51.796 "uuid": "cf128887-c1a1-42d2-8287-30ae87b9f4f4", 00:10:51.796 "is_configured": true, 00:10:51.796 "data_offset": 0, 00:10:51.796 "data_size": 65536 00:10:51.796 }, 00:10:51.796 { 00:10:51.796 "name": "BaseBdev3", 00:10:51.796 "uuid": "35c29b50-4f64-438d-a788-e9c7b74d2c93", 00:10:51.796 "is_configured": true, 00:10:51.796 "data_offset": 0, 00:10:51.796 "data_size": 65536 00:10:51.796 } 00:10:51.796 ] 00:10:51.796 }' 00:10:51.796 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.796 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.055 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:52.055 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:52.055 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.055 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:52.055 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.055 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.055 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:52.055 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.055 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.055 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.055 [2024-12-06 16:26:33.879048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.314 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.314 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.314 "name": "Existed_Raid", 00:10:52.314 "aliases": [ 00:10:52.314 "bf98e236-3bda-49dd-a28c-314f6f58d378" 00:10:52.314 ], 00:10:52.314 "product_name": "Raid Volume", 00:10:52.314 "block_size": 512, 00:10:52.315 "num_blocks": 196608, 00:10:52.315 "uuid": "bf98e236-3bda-49dd-a28c-314f6f58d378", 00:10:52.315 "assigned_rate_limits": { 00:10:52.315 "rw_ios_per_sec": 0, 00:10:52.315 "rw_mbytes_per_sec": 0, 00:10:52.315 "r_mbytes_per_sec": 0, 00:10:52.315 "w_mbytes_per_sec": 0 00:10:52.315 }, 00:10:52.315 "claimed": false, 00:10:52.315 "zoned": false, 00:10:52.315 "supported_io_types": { 00:10:52.315 "read": true, 00:10:52.315 "write": true, 00:10:52.315 "unmap": true, 00:10:52.315 "flush": true, 00:10:52.315 "reset": true, 00:10:52.315 "nvme_admin": false, 00:10:52.315 "nvme_io": false, 00:10:52.315 "nvme_io_md": false, 00:10:52.315 "write_zeroes": true, 00:10:52.315 "zcopy": false, 00:10:52.315 "get_zone_info": false, 00:10:52.315 "zone_management": false, 00:10:52.315 
"zone_append": false, 00:10:52.315 "compare": false, 00:10:52.315 "compare_and_write": false, 00:10:52.315 "abort": false, 00:10:52.315 "seek_hole": false, 00:10:52.315 "seek_data": false, 00:10:52.315 "copy": false, 00:10:52.315 "nvme_iov_md": false 00:10:52.315 }, 00:10:52.315 "memory_domains": [ 00:10:52.315 { 00:10:52.315 "dma_device_id": "system", 00:10:52.315 "dma_device_type": 1 00:10:52.315 }, 00:10:52.315 { 00:10:52.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.315 "dma_device_type": 2 00:10:52.315 }, 00:10:52.315 { 00:10:52.315 "dma_device_id": "system", 00:10:52.315 "dma_device_type": 1 00:10:52.315 }, 00:10:52.315 { 00:10:52.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.315 "dma_device_type": 2 00:10:52.315 }, 00:10:52.315 { 00:10:52.315 "dma_device_id": "system", 00:10:52.315 "dma_device_type": 1 00:10:52.315 }, 00:10:52.315 { 00:10:52.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.315 "dma_device_type": 2 00:10:52.315 } 00:10:52.315 ], 00:10:52.315 "driver_specific": { 00:10:52.315 "raid": { 00:10:52.315 "uuid": "bf98e236-3bda-49dd-a28c-314f6f58d378", 00:10:52.315 "strip_size_kb": 64, 00:10:52.315 "state": "online", 00:10:52.315 "raid_level": "raid0", 00:10:52.315 "superblock": false, 00:10:52.315 "num_base_bdevs": 3, 00:10:52.315 "num_base_bdevs_discovered": 3, 00:10:52.315 "num_base_bdevs_operational": 3, 00:10:52.315 "base_bdevs_list": [ 00:10:52.315 { 00:10:52.315 "name": "BaseBdev1", 00:10:52.315 "uuid": "7bf0b977-a45b-41cf-bf4d-f3cb5e2b2da1", 00:10:52.315 "is_configured": true, 00:10:52.315 "data_offset": 0, 00:10:52.315 "data_size": 65536 00:10:52.315 }, 00:10:52.315 { 00:10:52.315 "name": "BaseBdev2", 00:10:52.315 "uuid": "cf128887-c1a1-42d2-8287-30ae87b9f4f4", 00:10:52.315 "is_configured": true, 00:10:52.315 "data_offset": 0, 00:10:52.315 "data_size": 65536 00:10:52.315 }, 00:10:52.315 { 00:10:52.315 "name": "BaseBdev3", 00:10:52.315 "uuid": "35c29b50-4f64-438d-a788-e9c7b74d2c93", 00:10:52.315 "is_configured": true, 
00:10:52.315 "data_offset": 0, 00:10:52.315 "data_size": 65536 00:10:52.315 } 00:10:52.315 ] 00:10:52.315 } 00:10:52.315 } 00:10:52.315 }' 00:10:52.315 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.315 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:52.315 BaseBdev2 00:10:52.315 BaseBdev3' 00:10:52.315 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.315 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.315 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.315 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.315 16:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:52.315 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.315 16:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.315 [2024-12-06 16:26:34.130344] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:52.315 [2024-12-06 16:26:34.130441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.315 [2024-12-06 16:26:34.130541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:52.315 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.575 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.575 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.575 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.575 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.575 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.575 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.575 "name": "Existed_Raid", 00:10:52.575 "uuid": "bf98e236-3bda-49dd-a28c-314f6f58d378", 00:10:52.575 "strip_size_kb": 64, 00:10:52.575 "state": "offline", 00:10:52.575 "raid_level": "raid0", 00:10:52.575 "superblock": false, 00:10:52.575 "num_base_bdevs": 3, 00:10:52.575 "num_base_bdevs_discovered": 2, 00:10:52.575 "num_base_bdevs_operational": 2, 00:10:52.575 "base_bdevs_list": [ 00:10:52.575 { 00:10:52.575 "name": null, 00:10:52.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.575 "is_configured": false, 00:10:52.575 "data_offset": 0, 00:10:52.575 "data_size": 65536 00:10:52.575 }, 00:10:52.575 { 00:10:52.575 "name": "BaseBdev2", 00:10:52.575 "uuid": "cf128887-c1a1-42d2-8287-30ae87b9f4f4", 00:10:52.575 "is_configured": true, 00:10:52.575 "data_offset": 0, 00:10:52.575 "data_size": 65536 00:10:52.575 }, 00:10:52.575 { 00:10:52.575 "name": "BaseBdev3", 00:10:52.575 "uuid": "35c29b50-4f64-438d-a788-e9c7b74d2c93", 00:10:52.575 "is_configured": true, 00:10:52.575 "data_offset": 0, 00:10:52.575 "data_size": 65536 00:10:52.575 } 00:10:52.575 ] 00:10:52.575 }' 00:10:52.575 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.575 16:26:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.835 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:52.835 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.835 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.835 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.835 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.835 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.835 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.095 [2024-12-06 16:26:34.677359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.095 16:26:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.095 [2024-12-06 16:26:34.754403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:53.095 [2024-12-06 16:26:34.754605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.095 BaseBdev2 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.095 [ 00:10:53.095 { 00:10:53.095 "name": "BaseBdev2", 00:10:53.095 "aliases": [ 00:10:53.095 "10f45881-79ad-40ac-b42d-0a241f30617a" 00:10:53.095 ], 00:10:53.095 "product_name": "Malloc disk", 00:10:53.095 "block_size": 512, 00:10:53.095 "num_blocks": 65536, 00:10:53.095 "uuid": "10f45881-79ad-40ac-b42d-0a241f30617a", 00:10:53.095 "assigned_rate_limits": { 00:10:53.095 "rw_ios_per_sec": 0, 00:10:53.095 "rw_mbytes_per_sec": 0, 00:10:53.095 "r_mbytes_per_sec": 0, 00:10:53.095 "w_mbytes_per_sec": 0 00:10:53.095 }, 00:10:53.095 "claimed": false, 00:10:53.095 "zoned": false, 00:10:53.095 "supported_io_types": { 00:10:53.095 "read": true, 00:10:53.095 "write": true, 00:10:53.095 "unmap": true, 00:10:53.095 "flush": true, 00:10:53.095 "reset": true, 00:10:53.095 "nvme_admin": false, 00:10:53.095 "nvme_io": false, 00:10:53.095 "nvme_io_md": false, 00:10:53.095 "write_zeroes": true, 00:10:53.095 "zcopy": true, 00:10:53.095 "get_zone_info": false, 00:10:53.095 "zone_management": false, 00:10:53.095 "zone_append": false, 00:10:53.095 "compare": false, 00:10:53.095 "compare_and_write": false, 00:10:53.095 "abort": true, 00:10:53.095 "seek_hole": false, 00:10:53.095 "seek_data": false, 00:10:53.095 "copy": true, 00:10:53.095 "nvme_iov_md": false 00:10:53.095 }, 00:10:53.095 "memory_domains": [ 00:10:53.095 { 00:10:53.095 "dma_device_id": "system", 00:10:53.095 "dma_device_type": 1 00:10:53.095 }, 
00:10:53.095 { 00:10:53.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.095 "dma_device_type": 2 00:10:53.095 } 00:10:53.095 ], 00:10:53.095 "driver_specific": {} 00:10:53.095 } 00:10:53.095 ] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.095 BaseBdev3 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.095 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.355 [ 00:10:53.355 { 00:10:53.355 "name": "BaseBdev3", 00:10:53.355 "aliases": [ 00:10:53.355 "e072b494-04b4-4717-ad4f-3d842c7663b4" 00:10:53.355 ], 00:10:53.355 "product_name": "Malloc disk", 00:10:53.355 "block_size": 512, 00:10:53.355 "num_blocks": 65536, 00:10:53.355 "uuid": "e072b494-04b4-4717-ad4f-3d842c7663b4", 00:10:53.355 "assigned_rate_limits": { 00:10:53.355 "rw_ios_per_sec": 0, 00:10:53.355 "rw_mbytes_per_sec": 0, 00:10:53.355 "r_mbytes_per_sec": 0, 00:10:53.355 "w_mbytes_per_sec": 0 00:10:53.355 }, 00:10:53.355 "claimed": false, 00:10:53.355 "zoned": false, 00:10:53.355 "supported_io_types": { 00:10:53.355 "read": true, 00:10:53.355 "write": true, 00:10:53.355 "unmap": true, 00:10:53.355 "flush": true, 00:10:53.355 "reset": true, 00:10:53.355 "nvme_admin": false, 00:10:53.355 "nvme_io": false, 00:10:53.355 "nvme_io_md": false, 00:10:53.355 "write_zeroes": true, 00:10:53.355 "zcopy": true, 00:10:53.355 "get_zone_info": false, 00:10:53.355 "zone_management": false, 00:10:53.355 "zone_append": false, 00:10:53.355 "compare": false, 00:10:53.355 "compare_and_write": false, 00:10:53.355 "abort": true, 00:10:53.355 "seek_hole": false, 00:10:53.355 "seek_data": false, 00:10:53.355 "copy": true, 00:10:53.355 "nvme_iov_md": false 00:10:53.355 }, 00:10:53.355 "memory_domains": [ 00:10:53.355 { 00:10:53.355 "dma_device_id": "system", 00:10:53.355 "dma_device_type": 1 00:10:53.355 }, 00:10:53.355 { 
00:10:53.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.355 "dma_device_type": 2 00:10:53.355 } 00:10:53.355 ], 00:10:53.355 "driver_specific": {} 00:10:53.355 } 00:10:53.355 ] 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.355 [2024-12-06 16:26:34.955625] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.355 [2024-12-06 16:26:34.955823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.355 [2024-12-06 16:26:34.955905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.355 [2024-12-06 16:26:34.958296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.355 16:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.355 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.355 "name": "Existed_Raid", 00:10:53.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.355 "strip_size_kb": 64, 00:10:53.355 "state": "configuring", 00:10:53.355 "raid_level": "raid0", 00:10:53.355 "superblock": false, 00:10:53.355 "num_base_bdevs": 3, 00:10:53.355 "num_base_bdevs_discovered": 2, 00:10:53.355 "num_base_bdevs_operational": 3, 00:10:53.355 "base_bdevs_list": [ 00:10:53.355 { 00:10:53.355 "name": "BaseBdev1", 00:10:53.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.355 
"is_configured": false, 00:10:53.356 "data_offset": 0, 00:10:53.356 "data_size": 0 00:10:53.356 }, 00:10:53.356 { 00:10:53.356 "name": "BaseBdev2", 00:10:53.356 "uuid": "10f45881-79ad-40ac-b42d-0a241f30617a", 00:10:53.356 "is_configured": true, 00:10:53.356 "data_offset": 0, 00:10:53.356 "data_size": 65536 00:10:53.356 }, 00:10:53.356 { 00:10:53.356 "name": "BaseBdev3", 00:10:53.356 "uuid": "e072b494-04b4-4717-ad4f-3d842c7663b4", 00:10:53.356 "is_configured": true, 00:10:53.356 "data_offset": 0, 00:10:53.356 "data_size": 65536 00:10:53.356 } 00:10:53.356 ] 00:10:53.356 }' 00:10:53.356 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.356 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.615 [2024-12-06 16:26:35.359021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.615 16:26:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.615 "name": "Existed_Raid", 00:10:53.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.615 "strip_size_kb": 64, 00:10:53.615 "state": "configuring", 00:10:53.615 "raid_level": "raid0", 00:10:53.615 "superblock": false, 00:10:53.615 "num_base_bdevs": 3, 00:10:53.615 "num_base_bdevs_discovered": 1, 00:10:53.615 "num_base_bdevs_operational": 3, 00:10:53.615 "base_bdevs_list": [ 00:10:53.615 { 00:10:53.615 "name": "BaseBdev1", 00:10:53.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.615 "is_configured": false, 00:10:53.615 "data_offset": 0, 00:10:53.615 "data_size": 0 00:10:53.615 }, 00:10:53.615 { 00:10:53.615 "name": null, 00:10:53.615 "uuid": "10f45881-79ad-40ac-b42d-0a241f30617a", 00:10:53.615 "is_configured": false, 00:10:53.615 "data_offset": 0, 
00:10:53.615 "data_size": 65536 00:10:53.615 }, 00:10:53.615 { 00:10:53.615 "name": "BaseBdev3", 00:10:53.615 "uuid": "e072b494-04b4-4717-ad4f-3d842c7663b4", 00:10:53.615 "is_configured": true, 00:10:53.615 "data_offset": 0, 00:10:53.615 "data_size": 65536 00:10:53.615 } 00:10:53.615 ] 00:10:53.615 }' 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.615 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.201 [2024-12-06 16:26:35.887168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.201 BaseBdev1 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.201 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.201 [ 00:10:54.201 { 00:10:54.201 "name": "BaseBdev1", 00:10:54.201 "aliases": [ 00:10:54.201 "be67a86f-bd29-4478-98a7-b65730f5efdf" 00:10:54.201 ], 00:10:54.201 "product_name": "Malloc disk", 00:10:54.201 "block_size": 512, 00:10:54.201 "num_blocks": 65536, 00:10:54.201 "uuid": "be67a86f-bd29-4478-98a7-b65730f5efdf", 00:10:54.201 "assigned_rate_limits": { 00:10:54.201 "rw_ios_per_sec": 0, 00:10:54.201 "rw_mbytes_per_sec": 0, 00:10:54.201 "r_mbytes_per_sec": 0, 00:10:54.201 "w_mbytes_per_sec": 0 00:10:54.201 }, 00:10:54.201 "claimed": true, 00:10:54.201 "claim_type": "exclusive_write", 00:10:54.201 "zoned": false, 00:10:54.201 "supported_io_types": { 00:10:54.201 "read": true, 00:10:54.201 "write": true, 00:10:54.201 "unmap": 
true, 00:10:54.201 "flush": true, 00:10:54.201 "reset": true, 00:10:54.201 "nvme_admin": false, 00:10:54.201 "nvme_io": false, 00:10:54.201 "nvme_io_md": false, 00:10:54.201 "write_zeroes": true, 00:10:54.201 "zcopy": true, 00:10:54.201 "get_zone_info": false, 00:10:54.201 "zone_management": false, 00:10:54.202 "zone_append": false, 00:10:54.202 "compare": false, 00:10:54.202 "compare_and_write": false, 00:10:54.202 "abort": true, 00:10:54.202 "seek_hole": false, 00:10:54.202 "seek_data": false, 00:10:54.202 "copy": true, 00:10:54.202 "nvme_iov_md": false 00:10:54.202 }, 00:10:54.202 "memory_domains": [ 00:10:54.202 { 00:10:54.202 "dma_device_id": "system", 00:10:54.202 "dma_device_type": 1 00:10:54.202 }, 00:10:54.202 { 00:10:54.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.202 "dma_device_type": 2 00:10:54.202 } 00:10:54.202 ], 00:10:54.202 "driver_specific": {} 00:10:54.202 } 00:10:54.202 ] 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.202 16:26:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.202 "name": "Existed_Raid", 00:10:54.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.202 "strip_size_kb": 64, 00:10:54.202 "state": "configuring", 00:10:54.202 "raid_level": "raid0", 00:10:54.202 "superblock": false, 00:10:54.202 "num_base_bdevs": 3, 00:10:54.202 "num_base_bdevs_discovered": 2, 00:10:54.202 "num_base_bdevs_operational": 3, 00:10:54.202 "base_bdevs_list": [ 00:10:54.202 { 00:10:54.202 "name": "BaseBdev1", 00:10:54.202 "uuid": "be67a86f-bd29-4478-98a7-b65730f5efdf", 00:10:54.202 "is_configured": true, 00:10:54.202 "data_offset": 0, 00:10:54.202 "data_size": 65536 00:10:54.202 }, 00:10:54.202 { 00:10:54.202 "name": null, 00:10:54.202 "uuid": "10f45881-79ad-40ac-b42d-0a241f30617a", 00:10:54.202 "is_configured": false, 00:10:54.202 "data_offset": 0, 00:10:54.202 "data_size": 65536 00:10:54.202 }, 00:10:54.202 { 00:10:54.202 "name": "BaseBdev3", 00:10:54.202 "uuid": "e072b494-04b4-4717-ad4f-3d842c7663b4", 00:10:54.202 "is_configured": true, 00:10:54.202 "data_offset": 0, 
00:10:54.202 "data_size": 65536 00:10:54.202 } 00:10:54.202 ] 00:10:54.202 }' 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.202 16:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.768 [2024-12-06 16:26:36.442436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.768 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.769 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.769 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.769 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.769 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.769 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.769 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.769 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.769 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.769 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.769 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.769 "name": "Existed_Raid", 00:10:54.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.769 "strip_size_kb": 64, 00:10:54.769 "state": "configuring", 00:10:54.769 "raid_level": "raid0", 00:10:54.769 "superblock": false, 00:10:54.769 "num_base_bdevs": 3, 00:10:54.769 "num_base_bdevs_discovered": 1, 00:10:54.769 "num_base_bdevs_operational": 3, 00:10:54.769 "base_bdevs_list": [ 00:10:54.769 { 00:10:54.769 "name": "BaseBdev1", 00:10:54.769 "uuid": "be67a86f-bd29-4478-98a7-b65730f5efdf", 00:10:54.769 "is_configured": true, 00:10:54.769 "data_offset": 0, 00:10:54.769 "data_size": 65536 00:10:54.769 }, 00:10:54.769 { 
00:10:54.769 "name": null, 00:10:54.769 "uuid": "10f45881-79ad-40ac-b42d-0a241f30617a", 00:10:54.769 "is_configured": false, 00:10:54.769 "data_offset": 0, 00:10:54.769 "data_size": 65536 00:10:54.769 }, 00:10:54.769 { 00:10:54.769 "name": null, 00:10:54.769 "uuid": "e072b494-04b4-4717-ad4f-3d842c7663b4", 00:10:54.769 "is_configured": false, 00:10:54.769 "data_offset": 0, 00:10:54.769 "data_size": 65536 00:10:54.769 } 00:10:54.769 ] 00:10:54.769 }' 00:10:54.769 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.769 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.337 [2024-12-06 16:26:36.897653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.337 "name": "Existed_Raid", 00:10:55.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.337 "strip_size_kb": 64, 00:10:55.337 "state": "configuring", 00:10:55.337 "raid_level": "raid0", 00:10:55.337 
"superblock": false, 00:10:55.337 "num_base_bdevs": 3, 00:10:55.337 "num_base_bdevs_discovered": 2, 00:10:55.337 "num_base_bdevs_operational": 3, 00:10:55.337 "base_bdevs_list": [ 00:10:55.337 { 00:10:55.337 "name": "BaseBdev1", 00:10:55.337 "uuid": "be67a86f-bd29-4478-98a7-b65730f5efdf", 00:10:55.337 "is_configured": true, 00:10:55.337 "data_offset": 0, 00:10:55.337 "data_size": 65536 00:10:55.337 }, 00:10:55.337 { 00:10:55.337 "name": null, 00:10:55.337 "uuid": "10f45881-79ad-40ac-b42d-0a241f30617a", 00:10:55.337 "is_configured": false, 00:10:55.337 "data_offset": 0, 00:10:55.337 "data_size": 65536 00:10:55.337 }, 00:10:55.337 { 00:10:55.337 "name": "BaseBdev3", 00:10:55.337 "uuid": "e072b494-04b4-4717-ad4f-3d842c7663b4", 00:10:55.337 "is_configured": true, 00:10:55.337 "data_offset": 0, 00:10:55.337 "data_size": 65536 00:10:55.337 } 00:10:55.337 ] 00:10:55.337 }' 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.337 16:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.596 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.596 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.597 [2024-12-06 16:26:37.388915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.597 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.855 16:26:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.855 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.855 "name": "Existed_Raid", 00:10:55.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.855 "strip_size_kb": 64, 00:10:55.855 "state": "configuring", 00:10:55.855 "raid_level": "raid0", 00:10:55.855 "superblock": false, 00:10:55.855 "num_base_bdevs": 3, 00:10:55.855 "num_base_bdevs_discovered": 1, 00:10:55.855 "num_base_bdevs_operational": 3, 00:10:55.855 "base_bdevs_list": [ 00:10:55.855 { 00:10:55.855 "name": null, 00:10:55.855 "uuid": "be67a86f-bd29-4478-98a7-b65730f5efdf", 00:10:55.855 "is_configured": false, 00:10:55.856 "data_offset": 0, 00:10:55.856 "data_size": 65536 00:10:55.856 }, 00:10:55.856 { 00:10:55.856 "name": null, 00:10:55.856 "uuid": "10f45881-79ad-40ac-b42d-0a241f30617a", 00:10:55.856 "is_configured": false, 00:10:55.856 "data_offset": 0, 00:10:55.856 "data_size": 65536 00:10:55.856 }, 00:10:55.856 { 00:10:55.856 "name": "BaseBdev3", 00:10:55.856 "uuid": "e072b494-04b4-4717-ad4f-3d842c7663b4", 00:10:55.856 "is_configured": true, 00:10:55.856 "data_offset": 0, 00:10:55.856 "data_size": 65536 00:10:55.856 } 00:10:55.856 ] 00:10:55.856 }' 00:10:55.856 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.856 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.116 [2024-12-06 16:26:37.888256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.116 "name": "Existed_Raid", 00:10:56.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.116 "strip_size_kb": 64, 00:10:56.116 "state": "configuring", 00:10:56.116 "raid_level": "raid0", 00:10:56.116 "superblock": false, 00:10:56.116 "num_base_bdevs": 3, 00:10:56.116 "num_base_bdevs_discovered": 2, 00:10:56.116 "num_base_bdevs_operational": 3, 00:10:56.116 "base_bdevs_list": [ 00:10:56.116 { 00:10:56.116 "name": null, 00:10:56.116 "uuid": "be67a86f-bd29-4478-98a7-b65730f5efdf", 00:10:56.116 "is_configured": false, 00:10:56.116 "data_offset": 0, 00:10:56.116 "data_size": 65536 00:10:56.116 }, 00:10:56.116 { 00:10:56.116 "name": "BaseBdev2", 00:10:56.116 "uuid": "10f45881-79ad-40ac-b42d-0a241f30617a", 00:10:56.116 "is_configured": true, 00:10:56.116 "data_offset": 0, 00:10:56.116 "data_size": 65536 00:10:56.116 }, 00:10:56.116 { 00:10:56.116 "name": "BaseBdev3", 00:10:56.116 "uuid": "e072b494-04b4-4717-ad4f-3d842c7663b4", 00:10:56.116 "is_configured": true, 00:10:56.116 "data_offset": 0, 00:10:56.116 "data_size": 65536 00:10:56.116 } 00:10:56.116 ] 00:10:56.116 }' 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.116 16:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.683 
16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u be67a86f-bd29-4478-98a7-b65730f5efdf 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.683 [2024-12-06 16:26:38.448513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:56.683 [2024-12-06 16:26:38.448575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:56.683 [2024-12-06 16:26:38.448589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:56.683 [2024-12-06 16:26:38.448925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 
00:10:56.683 [2024-12-06 16:26:38.449110] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:56.683 [2024-12-06 16:26:38.449121] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:56.683 [2024-12-06 16:26:38.449422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.683 NewBaseBdev 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.683 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:56.683 [ 00:10:56.683 { 00:10:56.683 "name": "NewBaseBdev", 00:10:56.683 "aliases": [ 00:10:56.683 "be67a86f-bd29-4478-98a7-b65730f5efdf" 00:10:56.683 ], 00:10:56.683 "product_name": "Malloc disk", 00:10:56.683 "block_size": 512, 00:10:56.683 "num_blocks": 65536, 00:10:56.683 "uuid": "be67a86f-bd29-4478-98a7-b65730f5efdf", 00:10:56.683 "assigned_rate_limits": { 00:10:56.683 "rw_ios_per_sec": 0, 00:10:56.683 "rw_mbytes_per_sec": 0, 00:10:56.683 "r_mbytes_per_sec": 0, 00:10:56.683 "w_mbytes_per_sec": 0 00:10:56.683 }, 00:10:56.683 "claimed": true, 00:10:56.683 "claim_type": "exclusive_write", 00:10:56.683 "zoned": false, 00:10:56.683 "supported_io_types": { 00:10:56.683 "read": true, 00:10:56.683 "write": true, 00:10:56.683 "unmap": true, 00:10:56.683 "flush": true, 00:10:56.683 "reset": true, 00:10:56.683 "nvme_admin": false, 00:10:56.683 "nvme_io": false, 00:10:56.683 "nvme_io_md": false, 00:10:56.683 "write_zeroes": true, 00:10:56.683 "zcopy": true, 00:10:56.683 "get_zone_info": false, 00:10:56.684 "zone_management": false, 00:10:56.684 "zone_append": false, 00:10:56.684 "compare": false, 00:10:56.684 "compare_and_write": false, 00:10:56.684 "abort": true, 00:10:56.684 "seek_hole": false, 00:10:56.684 "seek_data": false, 00:10:56.684 "copy": true, 00:10:56.684 "nvme_iov_md": false 00:10:56.684 }, 00:10:56.684 "memory_domains": [ 00:10:56.684 { 00:10:56.684 "dma_device_id": "system", 00:10:56.684 "dma_device_type": 1 00:10:56.684 }, 00:10:56.684 { 00:10:56.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.684 "dma_device_type": 2 00:10:56.684 } 00:10:56.684 ], 00:10:56.684 "driver_specific": {} 00:10:56.684 } 00:10:56.684 ] 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.684 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.942 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.942 "name": "Existed_Raid", 00:10:56.942 "uuid": "1e48650e-9ecc-4e53-a858-e3b2ae0ee6cd", 00:10:56.942 "strip_size_kb": 64, 00:10:56.942 "state": "online", 00:10:56.942 "raid_level": "raid0", 00:10:56.942 "superblock": false, 00:10:56.942 "num_base_bdevs": 3, 00:10:56.942 
"num_base_bdevs_discovered": 3, 00:10:56.942 "num_base_bdevs_operational": 3, 00:10:56.942 "base_bdevs_list": [ 00:10:56.942 { 00:10:56.942 "name": "NewBaseBdev", 00:10:56.942 "uuid": "be67a86f-bd29-4478-98a7-b65730f5efdf", 00:10:56.942 "is_configured": true, 00:10:56.942 "data_offset": 0, 00:10:56.942 "data_size": 65536 00:10:56.942 }, 00:10:56.942 { 00:10:56.942 "name": "BaseBdev2", 00:10:56.942 "uuid": "10f45881-79ad-40ac-b42d-0a241f30617a", 00:10:56.942 "is_configured": true, 00:10:56.942 "data_offset": 0, 00:10:56.942 "data_size": 65536 00:10:56.942 }, 00:10:56.942 { 00:10:56.942 "name": "BaseBdev3", 00:10:56.942 "uuid": "e072b494-04b4-4717-ad4f-3d842c7663b4", 00:10:56.942 "is_configured": true, 00:10:56.942 "data_offset": 0, 00:10:56.942 "data_size": 65536 00:10:56.942 } 00:10:56.942 ] 00:10:56.942 }' 00:10:56.942 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.942 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.201 [2024-12-06 16:26:38.840317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.201 "name": "Existed_Raid", 00:10:57.201 "aliases": [ 00:10:57.201 "1e48650e-9ecc-4e53-a858-e3b2ae0ee6cd" 00:10:57.201 ], 00:10:57.201 "product_name": "Raid Volume", 00:10:57.201 "block_size": 512, 00:10:57.201 "num_blocks": 196608, 00:10:57.201 "uuid": "1e48650e-9ecc-4e53-a858-e3b2ae0ee6cd", 00:10:57.201 "assigned_rate_limits": { 00:10:57.201 "rw_ios_per_sec": 0, 00:10:57.201 "rw_mbytes_per_sec": 0, 00:10:57.201 "r_mbytes_per_sec": 0, 00:10:57.201 "w_mbytes_per_sec": 0 00:10:57.201 }, 00:10:57.201 "claimed": false, 00:10:57.201 "zoned": false, 00:10:57.201 "supported_io_types": { 00:10:57.201 "read": true, 00:10:57.201 "write": true, 00:10:57.201 "unmap": true, 00:10:57.201 "flush": true, 00:10:57.201 "reset": true, 00:10:57.201 "nvme_admin": false, 00:10:57.201 "nvme_io": false, 00:10:57.201 "nvme_io_md": false, 00:10:57.201 "write_zeroes": true, 00:10:57.201 "zcopy": false, 00:10:57.201 "get_zone_info": false, 00:10:57.201 "zone_management": false, 00:10:57.201 "zone_append": false, 00:10:57.201 "compare": false, 00:10:57.201 "compare_and_write": false, 00:10:57.201 "abort": false, 00:10:57.201 "seek_hole": false, 00:10:57.201 "seek_data": false, 00:10:57.201 "copy": false, 00:10:57.201 "nvme_iov_md": false 00:10:57.201 }, 00:10:57.201 "memory_domains": [ 00:10:57.201 { 00:10:57.201 "dma_device_id": "system", 00:10:57.201 "dma_device_type": 1 00:10:57.201 }, 00:10:57.201 { 00:10:57.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.201 "dma_device_type": 2 00:10:57.201 }, 00:10:57.201 
{ 00:10:57.201 "dma_device_id": "system", 00:10:57.201 "dma_device_type": 1 00:10:57.201 }, 00:10:57.201 { 00:10:57.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.201 "dma_device_type": 2 00:10:57.201 }, 00:10:57.201 { 00:10:57.201 "dma_device_id": "system", 00:10:57.201 "dma_device_type": 1 00:10:57.201 }, 00:10:57.201 { 00:10:57.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.201 "dma_device_type": 2 00:10:57.201 } 00:10:57.201 ], 00:10:57.201 "driver_specific": { 00:10:57.201 "raid": { 00:10:57.201 "uuid": "1e48650e-9ecc-4e53-a858-e3b2ae0ee6cd", 00:10:57.201 "strip_size_kb": 64, 00:10:57.201 "state": "online", 00:10:57.201 "raid_level": "raid0", 00:10:57.201 "superblock": false, 00:10:57.201 "num_base_bdevs": 3, 00:10:57.201 "num_base_bdevs_discovered": 3, 00:10:57.201 "num_base_bdevs_operational": 3, 00:10:57.201 "base_bdevs_list": [ 00:10:57.201 { 00:10:57.201 "name": "NewBaseBdev", 00:10:57.201 "uuid": "be67a86f-bd29-4478-98a7-b65730f5efdf", 00:10:57.201 "is_configured": true, 00:10:57.201 "data_offset": 0, 00:10:57.201 "data_size": 65536 00:10:57.201 }, 00:10:57.201 { 00:10:57.201 "name": "BaseBdev2", 00:10:57.201 "uuid": "10f45881-79ad-40ac-b42d-0a241f30617a", 00:10:57.201 "is_configured": true, 00:10:57.201 "data_offset": 0, 00:10:57.201 "data_size": 65536 00:10:57.201 }, 00:10:57.201 { 00:10:57.201 "name": "BaseBdev3", 00:10:57.201 "uuid": "e072b494-04b4-4717-ad4f-3d842c7663b4", 00:10:57.201 "is_configured": true, 00:10:57.201 "data_offset": 0, 00:10:57.201 "data_size": 65536 00:10:57.201 } 00:10:57.201 ] 00:10:57.201 } 00:10:57.201 } 00:10:57.201 }' 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:57.201 BaseBdev2 00:10:57.201 BaseBdev3' 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.201 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.202 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.202 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.202 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.202 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.202 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.202 16:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.202 16:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.202 16:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.202 
16:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.202 16:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.202 16:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.202 16:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.202 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.202 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.202 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.461 [2024-12-06 16:26:39.063534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.461 [2024-12-06 16:26:39.063593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.461 [2024-12-06 16:26:39.063713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.461 [2024-12-06 16:26:39.063787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.461 [2024-12-06 16:26:39.063814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75405 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 75405 ']' 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 75405 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75405 00:10:57.461 killing process with pid 75405 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75405' 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 75405 00:10:57.461 [2024-12-06 16:26:39.110356] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.461 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 75405 00:10:57.461 [2024-12-06 16:26:39.173033] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.720 16:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:57.720 00:10:57.720 real 0m9.074s 00:10:57.720 user 0m15.290s 00:10:57.720 sys 0m1.854s 00:10:57.720 ************************************ 00:10:57.720 END TEST raid_state_function_test 00:10:57.720 
************************************ 00:10:57.720 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.720 16:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.979 16:26:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:57.979 16:26:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:57.979 16:26:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.979 16:26:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.979 ************************************ 00:10:57.980 START TEST raid_state_function_test_sb 00:10:57.980 ************************************ 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=76012 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:57.980 Process raid pid: 76012 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76012' 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 76012 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76012 ']' 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.980 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.980 [2024-12-06 16:26:39.686890] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:10:57.980 [2024-12-06 16:26:39.687029] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.239 [2024-12-06 16:26:39.861314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.239 [2024-12-06 16:26:39.905249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.239 [2024-12-06 16:26:39.985867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.239 [2024-12-06 16:26:39.986038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.812 [2024-12-06 16:26:40.538554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.812 [2024-12-06 16:26:40.538655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.812 [2024-12-06 16:26:40.538674] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:58.812 [2024-12-06 16:26:40.538687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:58.812 [2024-12-06 16:26:40.538696] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:58.812 [2024-12-06 16:26:40.538711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.812 "name": "Existed_Raid", 00:10:58.812 "uuid": "c223058a-411c-45d4-b704-cb0e3e8fb2af", 00:10:58.812 "strip_size_kb": 64, 00:10:58.812 "state": "configuring", 00:10:58.812 "raid_level": "raid0", 00:10:58.812 "superblock": true, 00:10:58.812 "num_base_bdevs": 3, 00:10:58.812 "num_base_bdevs_discovered": 0, 00:10:58.812 "num_base_bdevs_operational": 3, 00:10:58.812 "base_bdevs_list": [ 00:10:58.812 { 00:10:58.812 "name": "BaseBdev1", 00:10:58.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.812 "is_configured": false, 00:10:58.812 "data_offset": 0, 00:10:58.812 "data_size": 0 00:10:58.812 }, 00:10:58.812 { 00:10:58.812 "name": "BaseBdev2", 00:10:58.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.812 "is_configured": false, 00:10:58.812 "data_offset": 0, 00:10:58.812 "data_size": 0 00:10:58.812 }, 00:10:58.812 { 00:10:58.812 "name": "BaseBdev3", 00:10:58.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.812 "is_configured": false, 00:10:58.812 "data_offset": 0, 00:10:58.812 "data_size": 0 00:10:58.812 } 00:10:58.812 ] 00:10:58.812 }' 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.812 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.379 [2024-12-06 16:26:41.017897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.379 [2024-12-06 16:26:41.018070] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.379 [2024-12-06 16:26:41.025887] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.379 [2024-12-06 16:26:41.026012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.379 [2024-12-06 16:26:41.026045] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.379 [2024-12-06 16:26:41.026073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.379 [2024-12-06 16:26:41.026094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:59.379 [2024-12-06 16:26:41.026119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.379 [2024-12-06 16:26:41.049194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.379 BaseBdev1 
00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.379 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.380 [ 00:10:59.380 { 00:10:59.380 "name": "BaseBdev1", 00:10:59.380 "aliases": [ 00:10:59.380 "1b30a886-e9e8-49db-9dc2-89463fad56e7" 00:10:59.380 ], 00:10:59.380 "product_name": "Malloc disk", 00:10:59.380 "block_size": 512, 00:10:59.380 "num_blocks": 65536, 00:10:59.380 "uuid": "1b30a886-e9e8-49db-9dc2-89463fad56e7", 00:10:59.380 "assigned_rate_limits": { 00:10:59.380 
"rw_ios_per_sec": 0, 00:10:59.380 "rw_mbytes_per_sec": 0, 00:10:59.380 "r_mbytes_per_sec": 0, 00:10:59.380 "w_mbytes_per_sec": 0 00:10:59.380 }, 00:10:59.380 "claimed": true, 00:10:59.380 "claim_type": "exclusive_write", 00:10:59.380 "zoned": false, 00:10:59.380 "supported_io_types": { 00:10:59.380 "read": true, 00:10:59.380 "write": true, 00:10:59.380 "unmap": true, 00:10:59.380 "flush": true, 00:10:59.380 "reset": true, 00:10:59.380 "nvme_admin": false, 00:10:59.380 "nvme_io": false, 00:10:59.380 "nvme_io_md": false, 00:10:59.380 "write_zeroes": true, 00:10:59.380 "zcopy": true, 00:10:59.380 "get_zone_info": false, 00:10:59.380 "zone_management": false, 00:10:59.380 "zone_append": false, 00:10:59.380 "compare": false, 00:10:59.380 "compare_and_write": false, 00:10:59.380 "abort": true, 00:10:59.380 "seek_hole": false, 00:10:59.380 "seek_data": false, 00:10:59.380 "copy": true, 00:10:59.380 "nvme_iov_md": false 00:10:59.380 }, 00:10:59.380 "memory_domains": [ 00:10:59.380 { 00:10:59.380 "dma_device_id": "system", 00:10:59.380 "dma_device_type": 1 00:10:59.380 }, 00:10:59.380 { 00:10:59.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.380 "dma_device_type": 2 00:10:59.380 } 00:10:59.380 ], 00:10:59.380 "driver_specific": {} 00:10:59.380 } 00:10:59.380 ] 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.380 "name": "Existed_Raid", 00:10:59.380 "uuid": "08f73343-a4a8-44cb-80ea-ba2e97f8c427", 00:10:59.380 "strip_size_kb": 64, 00:10:59.380 "state": "configuring", 00:10:59.380 "raid_level": "raid0", 00:10:59.380 "superblock": true, 00:10:59.380 "num_base_bdevs": 3, 00:10:59.380 "num_base_bdevs_discovered": 1, 00:10:59.380 "num_base_bdevs_operational": 3, 00:10:59.380 "base_bdevs_list": [ 00:10:59.380 { 00:10:59.380 "name": "BaseBdev1", 00:10:59.380 "uuid": "1b30a886-e9e8-49db-9dc2-89463fad56e7", 00:10:59.380 "is_configured": true, 00:10:59.380 "data_offset": 2048, 00:10:59.380 "data_size": 63488 
00:10:59.380 }, 00:10:59.380 { 00:10:59.380 "name": "BaseBdev2", 00:10:59.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.380 "is_configured": false, 00:10:59.380 "data_offset": 0, 00:10:59.380 "data_size": 0 00:10:59.380 }, 00:10:59.380 { 00:10:59.380 "name": "BaseBdev3", 00:10:59.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.380 "is_configured": false, 00:10:59.380 "data_offset": 0, 00:10:59.380 "data_size": 0 00:10:59.380 } 00:10:59.380 ] 00:10:59.380 }' 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.380 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.947 [2024-12-06 16:26:41.516453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.947 [2024-12-06 16:26:41.516526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.947 [2024-12-06 16:26:41.528482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.947 [2024-12-06 
16:26:41.530600] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.947 [2024-12-06 16:26:41.530685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.947 [2024-12-06 16:26:41.530717] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:59.947 [2024-12-06 16:26:41.530742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.947 "name": "Existed_Raid", 00:10:59.947 "uuid": "f983dedb-1c4d-4be2-b3db-d67465d9dcbd", 00:10:59.947 "strip_size_kb": 64, 00:10:59.947 "state": "configuring", 00:10:59.947 "raid_level": "raid0", 00:10:59.947 "superblock": true, 00:10:59.947 "num_base_bdevs": 3, 00:10:59.947 "num_base_bdevs_discovered": 1, 00:10:59.947 "num_base_bdevs_operational": 3, 00:10:59.947 "base_bdevs_list": [ 00:10:59.947 { 00:10:59.947 "name": "BaseBdev1", 00:10:59.947 "uuid": "1b30a886-e9e8-49db-9dc2-89463fad56e7", 00:10:59.947 "is_configured": true, 00:10:59.947 "data_offset": 2048, 00:10:59.947 "data_size": 63488 00:10:59.947 }, 00:10:59.947 { 00:10:59.947 "name": "BaseBdev2", 00:10:59.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.947 "is_configured": false, 00:10:59.947 "data_offset": 0, 00:10:59.947 "data_size": 0 00:10:59.947 }, 00:10:59.947 { 00:10:59.947 "name": "BaseBdev3", 00:10:59.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.947 "is_configured": false, 00:10:59.947 "data_offset": 0, 00:10:59.947 "data_size": 0 00:10:59.947 } 00:10:59.947 ] 00:10:59.947 }' 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.947 16:26:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.207 [2024-12-06 16:26:42.031035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.207 BaseBdev2 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.207 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.208 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.208 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:00.208 16:26:42 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.208 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.467 [ 00:11:00.467 { 00:11:00.467 "name": "BaseBdev2", 00:11:00.467 "aliases": [ 00:11:00.467 "f0c31d3e-952c-4a52-9659-a6263a365e52" 00:11:00.467 ], 00:11:00.467 "product_name": "Malloc disk", 00:11:00.467 "block_size": 512, 00:11:00.467 "num_blocks": 65536, 00:11:00.467 "uuid": "f0c31d3e-952c-4a52-9659-a6263a365e52", 00:11:00.467 "assigned_rate_limits": { 00:11:00.467 "rw_ios_per_sec": 0, 00:11:00.467 "rw_mbytes_per_sec": 0, 00:11:00.467 "r_mbytes_per_sec": 0, 00:11:00.467 "w_mbytes_per_sec": 0 00:11:00.467 }, 00:11:00.467 "claimed": true, 00:11:00.467 "claim_type": "exclusive_write", 00:11:00.467 "zoned": false, 00:11:00.467 "supported_io_types": { 00:11:00.467 "read": true, 00:11:00.467 "write": true, 00:11:00.467 "unmap": true, 00:11:00.467 "flush": true, 00:11:00.467 "reset": true, 00:11:00.467 "nvme_admin": false, 00:11:00.467 "nvme_io": false, 00:11:00.467 "nvme_io_md": false, 00:11:00.467 "write_zeroes": true, 00:11:00.467 "zcopy": true, 00:11:00.467 "get_zone_info": false, 00:11:00.467 "zone_management": false, 00:11:00.467 "zone_append": false, 00:11:00.467 "compare": false, 00:11:00.467 "compare_and_write": false, 00:11:00.467 "abort": true, 00:11:00.467 "seek_hole": false, 00:11:00.467 "seek_data": false, 00:11:00.467 "copy": true, 00:11:00.467 "nvme_iov_md": false 00:11:00.467 }, 00:11:00.467 "memory_domains": [ 00:11:00.467 { 00:11:00.467 "dma_device_id": "system", 00:11:00.467 "dma_device_type": 1 00:11:00.467 }, 00:11:00.467 { 00:11:00.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.467 "dma_device_type": 2 00:11:00.467 } 00:11:00.467 ], 00:11:00.467 "driver_specific": {} 00:11:00.467 } 00:11:00.467 ] 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.467 "name": "Existed_Raid", 00:11:00.467 "uuid": "f983dedb-1c4d-4be2-b3db-d67465d9dcbd", 00:11:00.467 "strip_size_kb": 64, 00:11:00.467 "state": "configuring", 00:11:00.467 "raid_level": "raid0", 00:11:00.467 "superblock": true, 00:11:00.467 "num_base_bdevs": 3, 00:11:00.467 "num_base_bdevs_discovered": 2, 00:11:00.467 "num_base_bdevs_operational": 3, 00:11:00.467 "base_bdevs_list": [ 00:11:00.467 { 00:11:00.467 "name": "BaseBdev1", 00:11:00.467 "uuid": "1b30a886-e9e8-49db-9dc2-89463fad56e7", 00:11:00.467 "is_configured": true, 00:11:00.467 "data_offset": 2048, 00:11:00.467 "data_size": 63488 00:11:00.467 }, 00:11:00.467 { 00:11:00.467 "name": "BaseBdev2", 00:11:00.467 "uuid": "f0c31d3e-952c-4a52-9659-a6263a365e52", 00:11:00.467 "is_configured": true, 00:11:00.467 "data_offset": 2048, 00:11:00.467 "data_size": 63488 00:11:00.467 }, 00:11:00.467 { 00:11:00.467 "name": "BaseBdev3", 00:11:00.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.467 "is_configured": false, 00:11:00.467 "data_offset": 0, 00:11:00.467 "data_size": 0 00:11:00.467 } 00:11:00.467 ] 00:11:00.467 }' 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.467 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.726 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:00.726 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.726 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.726 [2024-12-06 16:26:42.457285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.726 [2024-12-06 16:26:42.457604] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:00.726 [2024-12-06 16:26:42.457669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:00.726 [2024-12-06 16:26:42.458004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:00.726 [2024-12-06 16:26:42.458221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:00.726 [2024-12-06 16:26:42.458269] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:11:00.726 BaseBdev3 00:11:00.726 [2024-12-06 16:26:42.458485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.726 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.726 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:00.726 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:00.726 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.726 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.726 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.726 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.726 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.726 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.727 [ 00:11:00.727 { 00:11:00.727 "name": "BaseBdev3", 00:11:00.727 "aliases": [ 00:11:00.727 "f7078410-f59a-430b-91fc-ff73c0daf1fd" 00:11:00.727 ], 00:11:00.727 "product_name": "Malloc disk", 00:11:00.727 "block_size": 512, 00:11:00.727 "num_blocks": 65536, 00:11:00.727 "uuid": "f7078410-f59a-430b-91fc-ff73c0daf1fd", 00:11:00.727 "assigned_rate_limits": { 00:11:00.727 "rw_ios_per_sec": 0, 00:11:00.727 "rw_mbytes_per_sec": 0, 00:11:00.727 "r_mbytes_per_sec": 0, 00:11:00.727 "w_mbytes_per_sec": 0 00:11:00.727 }, 00:11:00.727 "claimed": true, 00:11:00.727 "claim_type": "exclusive_write", 00:11:00.727 "zoned": false, 00:11:00.727 "supported_io_types": { 00:11:00.727 "read": true, 00:11:00.727 "write": true, 00:11:00.727 "unmap": true, 00:11:00.727 "flush": true, 00:11:00.727 "reset": true, 00:11:00.727 "nvme_admin": false, 00:11:00.727 "nvme_io": false, 00:11:00.727 "nvme_io_md": false, 00:11:00.727 "write_zeroes": true, 00:11:00.727 "zcopy": true, 00:11:00.727 "get_zone_info": false, 00:11:00.727 "zone_management": false, 00:11:00.727 "zone_append": false, 00:11:00.727 "compare": false, 00:11:00.727 "compare_and_write": false, 00:11:00.727 "abort": true, 00:11:00.727 "seek_hole": false, 00:11:00.727 "seek_data": false, 00:11:00.727 "copy": true, 00:11:00.727 "nvme_iov_md": false 00:11:00.727 }, 00:11:00.727 "memory_domains": [ 00:11:00.727 { 00:11:00.727 "dma_device_id": "system", 00:11:00.727 "dma_device_type": 1 00:11:00.727 }, 00:11:00.727 { 00:11:00.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.727 "dma_device_type": 2 00:11:00.727 } 00:11:00.727 ], 00:11:00.727 "driver_specific": 
{} 00:11:00.727 } 00:11:00.727 ] 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.727 
16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.727 "name": "Existed_Raid", 00:11:00.727 "uuid": "f983dedb-1c4d-4be2-b3db-d67465d9dcbd", 00:11:00.727 "strip_size_kb": 64, 00:11:00.727 "state": "online", 00:11:00.727 "raid_level": "raid0", 00:11:00.727 "superblock": true, 00:11:00.727 "num_base_bdevs": 3, 00:11:00.727 "num_base_bdevs_discovered": 3, 00:11:00.727 "num_base_bdevs_operational": 3, 00:11:00.727 "base_bdevs_list": [ 00:11:00.727 { 00:11:00.727 "name": "BaseBdev1", 00:11:00.727 "uuid": "1b30a886-e9e8-49db-9dc2-89463fad56e7", 00:11:00.727 "is_configured": true, 00:11:00.727 "data_offset": 2048, 00:11:00.727 "data_size": 63488 00:11:00.727 }, 00:11:00.727 { 00:11:00.727 "name": "BaseBdev2", 00:11:00.727 "uuid": "f0c31d3e-952c-4a52-9659-a6263a365e52", 00:11:00.727 "is_configured": true, 00:11:00.727 "data_offset": 2048, 00:11:00.727 "data_size": 63488 00:11:00.727 }, 00:11:00.727 { 00:11:00.727 "name": "BaseBdev3", 00:11:00.727 "uuid": "f7078410-f59a-430b-91fc-ff73c0daf1fd", 00:11:00.727 "is_configured": true, 00:11:00.727 "data_offset": 2048, 00:11:00.727 "data_size": 63488 00:11:00.727 } 00:11:00.727 ] 00:11:00.727 }' 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.727 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.295 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:01.295 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:01.295 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:11:01.295 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.295 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.295 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.295 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:01.295 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.295 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.295 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.295 [2024-12-06 16:26:42.980842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.295 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.295 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.295 "name": "Existed_Raid", 00:11:01.295 "aliases": [ 00:11:01.295 "f983dedb-1c4d-4be2-b3db-d67465d9dcbd" 00:11:01.295 ], 00:11:01.295 "product_name": "Raid Volume", 00:11:01.295 "block_size": 512, 00:11:01.295 "num_blocks": 190464, 00:11:01.295 "uuid": "f983dedb-1c4d-4be2-b3db-d67465d9dcbd", 00:11:01.295 "assigned_rate_limits": { 00:11:01.295 "rw_ios_per_sec": 0, 00:11:01.295 "rw_mbytes_per_sec": 0, 00:11:01.295 "r_mbytes_per_sec": 0, 00:11:01.295 "w_mbytes_per_sec": 0 00:11:01.295 }, 00:11:01.295 "claimed": false, 00:11:01.295 "zoned": false, 00:11:01.295 "supported_io_types": { 00:11:01.295 "read": true, 00:11:01.295 "write": true, 00:11:01.295 "unmap": true, 00:11:01.295 "flush": true, 00:11:01.295 "reset": true, 00:11:01.295 "nvme_admin": false, 00:11:01.295 "nvme_io": false, 00:11:01.295 "nvme_io_md": false, 00:11:01.295 
"write_zeroes": true, 00:11:01.295 "zcopy": false, 00:11:01.295 "get_zone_info": false, 00:11:01.295 "zone_management": false, 00:11:01.295 "zone_append": false, 00:11:01.295 "compare": false, 00:11:01.295 "compare_and_write": false, 00:11:01.295 "abort": false, 00:11:01.295 "seek_hole": false, 00:11:01.295 "seek_data": false, 00:11:01.295 "copy": false, 00:11:01.295 "nvme_iov_md": false 00:11:01.295 }, 00:11:01.295 "memory_domains": [ 00:11:01.295 { 00:11:01.295 "dma_device_id": "system", 00:11:01.295 "dma_device_type": 1 00:11:01.295 }, 00:11:01.295 { 00:11:01.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.295 "dma_device_type": 2 00:11:01.295 }, 00:11:01.295 { 00:11:01.295 "dma_device_id": "system", 00:11:01.295 "dma_device_type": 1 00:11:01.295 }, 00:11:01.295 { 00:11:01.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.295 "dma_device_type": 2 00:11:01.295 }, 00:11:01.295 { 00:11:01.295 "dma_device_id": "system", 00:11:01.295 "dma_device_type": 1 00:11:01.295 }, 00:11:01.295 { 00:11:01.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.295 "dma_device_type": 2 00:11:01.295 } 00:11:01.295 ], 00:11:01.295 "driver_specific": { 00:11:01.295 "raid": { 00:11:01.295 "uuid": "f983dedb-1c4d-4be2-b3db-d67465d9dcbd", 00:11:01.295 "strip_size_kb": 64, 00:11:01.295 "state": "online", 00:11:01.295 "raid_level": "raid0", 00:11:01.295 "superblock": true, 00:11:01.295 "num_base_bdevs": 3, 00:11:01.295 "num_base_bdevs_discovered": 3, 00:11:01.295 "num_base_bdevs_operational": 3, 00:11:01.295 "base_bdevs_list": [ 00:11:01.295 { 00:11:01.295 "name": "BaseBdev1", 00:11:01.295 "uuid": "1b30a886-e9e8-49db-9dc2-89463fad56e7", 00:11:01.295 "is_configured": true, 00:11:01.295 "data_offset": 2048, 00:11:01.295 "data_size": 63488 00:11:01.295 }, 00:11:01.295 { 00:11:01.295 "name": "BaseBdev2", 00:11:01.295 "uuid": "f0c31d3e-952c-4a52-9659-a6263a365e52", 00:11:01.295 "is_configured": true, 00:11:01.295 "data_offset": 2048, 00:11:01.295 "data_size": 63488 00:11:01.295 }, 
00:11:01.295 { 00:11:01.295 "name": "BaseBdev3", 00:11:01.295 "uuid": "f7078410-f59a-430b-91fc-ff73c0daf1fd", 00:11:01.295 "is_configured": true, 00:11:01.295 "data_offset": 2048, 00:11:01.295 "data_size": 63488 00:11:01.295 } 00:11:01.295 ] 00:11:01.295 } 00:11:01.295 } 00:11:01.295 }' 00:11:01.295 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.295 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:01.295 BaseBdev2 00:11:01.295 BaseBdev3' 00:11:01.295 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.295 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.295 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.295 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:01.295 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.295 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.295 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.555 
16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.555 [2024-12-06 16:26:43.276010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:01.555 [2024-12-06 16:26:43.276098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.555 [2024-12-06 16:26:43.276212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.555 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.555 "name": "Existed_Raid", 00:11:01.555 "uuid": "f983dedb-1c4d-4be2-b3db-d67465d9dcbd", 00:11:01.555 "strip_size_kb": 64, 00:11:01.555 "state": "offline", 00:11:01.555 "raid_level": "raid0", 00:11:01.555 "superblock": true, 00:11:01.555 "num_base_bdevs": 3, 00:11:01.555 "num_base_bdevs_discovered": 2, 00:11:01.555 "num_base_bdevs_operational": 2, 00:11:01.555 "base_bdevs_list": [ 00:11:01.555 { 00:11:01.555 "name": null, 00:11:01.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.555 "is_configured": false, 00:11:01.555 "data_offset": 0, 00:11:01.555 "data_size": 63488 00:11:01.555 }, 00:11:01.555 { 00:11:01.555 "name": "BaseBdev2", 00:11:01.555 "uuid": "f0c31d3e-952c-4a52-9659-a6263a365e52", 00:11:01.555 "is_configured": true, 00:11:01.555 "data_offset": 2048, 00:11:01.555 "data_size": 63488 00:11:01.555 }, 00:11:01.555 { 00:11:01.556 "name": "BaseBdev3", 00:11:01.556 "uuid": "f7078410-f59a-430b-91fc-ff73c0daf1fd", 
00:11:01.556 "is_configured": true, 00:11:01.556 "data_offset": 2048, 00:11:01.556 "data_size": 63488 00:11:01.556 } 00:11:01.556 ] 00:11:01.556 }' 00:11:01.556 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.556 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 [2024-12-06 16:26:43.775550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 [2024-12-06 16:26:43.843433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:02.126 [2024-12-06 16:26:43.843556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 BaseBdev2 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:02.126 16:26:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.126 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.126 [ 00:11:02.126 { 00:11:02.126 "name": "BaseBdev2", 00:11:02.126 "aliases": [ 00:11:02.126 "0d6f4ce1-b51c-4c9d-b5c9-30e1c8efc031" 00:11:02.126 ], 00:11:02.126 "product_name": "Malloc disk", 00:11:02.126 "block_size": 512, 00:11:02.126 "num_blocks": 65536, 00:11:02.126 "uuid": "0d6f4ce1-b51c-4c9d-b5c9-30e1c8efc031", 00:11:02.126 "assigned_rate_limits": { 00:11:02.126 "rw_ios_per_sec": 0, 00:11:02.126 "rw_mbytes_per_sec": 0, 00:11:02.126 "r_mbytes_per_sec": 0, 00:11:02.126 "w_mbytes_per_sec": 0 00:11:02.126 }, 00:11:02.126 "claimed": false, 00:11:02.126 "zoned": false, 00:11:02.126 "supported_io_types": { 00:11:02.126 "read": true, 00:11:02.126 "write": true, 00:11:02.126 "unmap": true, 00:11:02.126 "flush": true, 00:11:02.126 "reset": true, 00:11:02.126 "nvme_admin": false, 00:11:02.126 "nvme_io": false, 00:11:02.126 "nvme_io_md": false, 00:11:02.126 "write_zeroes": true, 00:11:02.126 "zcopy": true, 00:11:02.126 "get_zone_info": false, 00:11:02.126 
"zone_management": false, 00:11:02.126 "zone_append": false, 00:11:02.126 "compare": false, 00:11:02.126 "compare_and_write": false, 00:11:02.126 "abort": true, 00:11:02.126 "seek_hole": false, 00:11:02.126 "seek_data": false, 00:11:02.127 "copy": true, 00:11:02.127 "nvme_iov_md": false 00:11:02.127 }, 00:11:02.127 "memory_domains": [ 00:11:02.127 { 00:11:02.127 "dma_device_id": "system", 00:11:02.127 "dma_device_type": 1 00:11:02.127 }, 00:11:02.127 { 00:11:02.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.127 "dma_device_type": 2 00:11:02.127 } 00:11:02.127 ], 00:11:02.127 "driver_specific": {} 00:11:02.127 } 00:11:02.127 ] 00:11:02.127 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.127 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.127 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:02.127 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.387 BaseBdev3 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.387 16:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.387 [ 00:11:02.387 { 00:11:02.387 "name": "BaseBdev3", 00:11:02.387 "aliases": [ 00:11:02.387 "85d509e1-b7c5-46b8-a712-689f4729f928" 00:11:02.387 ], 00:11:02.387 "product_name": "Malloc disk", 00:11:02.387 "block_size": 512, 00:11:02.387 "num_blocks": 65536, 00:11:02.387 "uuid": "85d509e1-b7c5-46b8-a712-689f4729f928", 00:11:02.387 "assigned_rate_limits": { 00:11:02.387 "rw_ios_per_sec": 0, 00:11:02.387 "rw_mbytes_per_sec": 0, 00:11:02.387 "r_mbytes_per_sec": 0, 00:11:02.387 "w_mbytes_per_sec": 0 00:11:02.387 }, 00:11:02.387 "claimed": false, 00:11:02.387 "zoned": false, 00:11:02.387 "supported_io_types": { 00:11:02.387 "read": true, 00:11:02.387 "write": true, 00:11:02.387 "unmap": true, 00:11:02.387 "flush": true, 00:11:02.387 "reset": true, 00:11:02.387 "nvme_admin": false, 00:11:02.387 "nvme_io": false, 00:11:02.387 "nvme_io_md": false, 00:11:02.387 "write_zeroes": true, 00:11:02.387 
"zcopy": true, 00:11:02.387 "get_zone_info": false, 00:11:02.387 "zone_management": false, 00:11:02.387 "zone_append": false, 00:11:02.387 "compare": false, 00:11:02.387 "compare_and_write": false, 00:11:02.387 "abort": true, 00:11:02.387 "seek_hole": false, 00:11:02.387 "seek_data": false, 00:11:02.387 "copy": true, 00:11:02.387 "nvme_iov_md": false 00:11:02.387 }, 00:11:02.387 "memory_domains": [ 00:11:02.387 { 00:11:02.387 "dma_device_id": "system", 00:11:02.387 "dma_device_type": 1 00:11:02.387 }, 00:11:02.387 { 00:11:02.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.387 "dma_device_type": 2 00:11:02.387 } 00:11:02.387 ], 00:11:02.387 "driver_specific": {} 00:11:02.387 } 00:11:02.387 ] 00:11:02.387 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.387 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.387 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:02.387 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:02.387 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:02.387 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.387 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.387 [2024-12-06 16:26:44.021723] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.387 [2024-12-06 16:26:44.021865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.387 [2024-12-06 16:26:44.021930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.388 [2024-12-06 16:26:44.024046] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.388 16:26:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.388 "name": "Existed_Raid", 00:11:02.388 "uuid": "690735fa-9c65-4aee-92ec-3709fb73491d", 00:11:02.388 "strip_size_kb": 64, 00:11:02.388 "state": "configuring", 00:11:02.388 "raid_level": "raid0", 00:11:02.388 "superblock": true, 00:11:02.388 "num_base_bdevs": 3, 00:11:02.388 "num_base_bdevs_discovered": 2, 00:11:02.388 "num_base_bdevs_operational": 3, 00:11:02.388 "base_bdevs_list": [ 00:11:02.388 { 00:11:02.388 "name": "BaseBdev1", 00:11:02.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.388 "is_configured": false, 00:11:02.388 "data_offset": 0, 00:11:02.388 "data_size": 0 00:11:02.388 }, 00:11:02.388 { 00:11:02.388 "name": "BaseBdev2", 00:11:02.388 "uuid": "0d6f4ce1-b51c-4c9d-b5c9-30e1c8efc031", 00:11:02.388 "is_configured": true, 00:11:02.388 "data_offset": 2048, 00:11:02.388 "data_size": 63488 00:11:02.388 }, 00:11:02.388 { 00:11:02.388 "name": "BaseBdev3", 00:11:02.388 "uuid": "85d509e1-b7c5-46b8-a712-689f4729f928", 00:11:02.388 "is_configured": true, 00:11:02.388 "data_offset": 2048, 00:11:02.388 "data_size": 63488 00:11:02.388 } 00:11:02.388 ] 00:11:02.388 }' 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.388 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.646 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:02.646 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.646 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.905 [2024-12-06 16:26:44.484973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.905 16:26:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.905 "name": "Existed_Raid", 00:11:02.905 "uuid": "690735fa-9c65-4aee-92ec-3709fb73491d", 00:11:02.905 "strip_size_kb": 64, 
00:11:02.905 "state": "configuring", 00:11:02.905 "raid_level": "raid0", 00:11:02.905 "superblock": true, 00:11:02.905 "num_base_bdevs": 3, 00:11:02.905 "num_base_bdevs_discovered": 1, 00:11:02.905 "num_base_bdevs_operational": 3, 00:11:02.905 "base_bdevs_list": [ 00:11:02.905 { 00:11:02.905 "name": "BaseBdev1", 00:11:02.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.905 "is_configured": false, 00:11:02.905 "data_offset": 0, 00:11:02.905 "data_size": 0 00:11:02.905 }, 00:11:02.905 { 00:11:02.905 "name": null, 00:11:02.905 "uuid": "0d6f4ce1-b51c-4c9d-b5c9-30e1c8efc031", 00:11:02.905 "is_configured": false, 00:11:02.905 "data_offset": 0, 00:11:02.905 "data_size": 63488 00:11:02.905 }, 00:11:02.905 { 00:11:02.905 "name": "BaseBdev3", 00:11:02.905 "uuid": "85d509e1-b7c5-46b8-a712-689f4729f928", 00:11:02.905 "is_configured": true, 00:11:02.905 "data_offset": 2048, 00:11:02.905 "data_size": 63488 00:11:02.905 } 00:11:02.905 ] 00:11:02.905 }' 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.905 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.164 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.164 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:03.164 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.164 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.164 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.164 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:03.164 16:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:11:03.164 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.164 16:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.424 [2024-12-06 16:26:45.003396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.424 BaseBdev1 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.424 
[ 00:11:03.424 { 00:11:03.424 "name": "BaseBdev1", 00:11:03.424 "aliases": [ 00:11:03.424 "65f2e981-9544-4168-8729-afb25d00099d" 00:11:03.424 ], 00:11:03.424 "product_name": "Malloc disk", 00:11:03.424 "block_size": 512, 00:11:03.424 "num_blocks": 65536, 00:11:03.424 "uuid": "65f2e981-9544-4168-8729-afb25d00099d", 00:11:03.424 "assigned_rate_limits": { 00:11:03.424 "rw_ios_per_sec": 0, 00:11:03.424 "rw_mbytes_per_sec": 0, 00:11:03.424 "r_mbytes_per_sec": 0, 00:11:03.424 "w_mbytes_per_sec": 0 00:11:03.424 }, 00:11:03.424 "claimed": true, 00:11:03.424 "claim_type": "exclusive_write", 00:11:03.424 "zoned": false, 00:11:03.424 "supported_io_types": { 00:11:03.424 "read": true, 00:11:03.424 "write": true, 00:11:03.424 "unmap": true, 00:11:03.424 "flush": true, 00:11:03.424 "reset": true, 00:11:03.424 "nvme_admin": false, 00:11:03.424 "nvme_io": false, 00:11:03.424 "nvme_io_md": false, 00:11:03.424 "write_zeroes": true, 00:11:03.424 "zcopy": true, 00:11:03.424 "get_zone_info": false, 00:11:03.424 "zone_management": false, 00:11:03.424 "zone_append": false, 00:11:03.424 "compare": false, 00:11:03.424 "compare_and_write": false, 00:11:03.424 "abort": true, 00:11:03.424 "seek_hole": false, 00:11:03.424 "seek_data": false, 00:11:03.424 "copy": true, 00:11:03.424 "nvme_iov_md": false 00:11:03.424 }, 00:11:03.424 "memory_domains": [ 00:11:03.424 { 00:11:03.424 "dma_device_id": "system", 00:11:03.424 "dma_device_type": 1 00:11:03.424 }, 00:11:03.424 { 00:11:03.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.424 "dma_device_type": 2 00:11:03.424 } 00:11:03.424 ], 00:11:03.424 "driver_specific": {} 00:11:03.424 } 00:11:03.424 ] 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.424 "name": "Existed_Raid", 00:11:03.424 "uuid": "690735fa-9c65-4aee-92ec-3709fb73491d", 00:11:03.424 "strip_size_kb": 64, 00:11:03.424 "state": "configuring", 00:11:03.424 "raid_level": "raid0", 00:11:03.424 "superblock": true, 
00:11:03.424 "num_base_bdevs": 3, 00:11:03.424 "num_base_bdevs_discovered": 2, 00:11:03.424 "num_base_bdevs_operational": 3, 00:11:03.424 "base_bdevs_list": [ 00:11:03.424 { 00:11:03.424 "name": "BaseBdev1", 00:11:03.424 "uuid": "65f2e981-9544-4168-8729-afb25d00099d", 00:11:03.424 "is_configured": true, 00:11:03.424 "data_offset": 2048, 00:11:03.424 "data_size": 63488 00:11:03.424 }, 00:11:03.424 { 00:11:03.424 "name": null, 00:11:03.424 "uuid": "0d6f4ce1-b51c-4c9d-b5c9-30e1c8efc031", 00:11:03.424 "is_configured": false, 00:11:03.424 "data_offset": 0, 00:11:03.424 "data_size": 63488 00:11:03.424 }, 00:11:03.424 { 00:11:03.424 "name": "BaseBdev3", 00:11:03.424 "uuid": "85d509e1-b7c5-46b8-a712-689f4729f928", 00:11:03.424 "is_configured": true, 00:11:03.424 "data_offset": 2048, 00:11:03.424 "data_size": 63488 00:11:03.424 } 00:11:03.424 ] 00:11:03.424 }' 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.424 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.682 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.682 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.682 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.682 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.941 [2024-12-06 16:26:45.546560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.941 "name": "Existed_Raid", 00:11:03.941 "uuid": "690735fa-9c65-4aee-92ec-3709fb73491d", 00:11:03.941 "strip_size_kb": 64, 00:11:03.941 "state": "configuring", 00:11:03.941 "raid_level": "raid0", 00:11:03.941 "superblock": true, 00:11:03.941 "num_base_bdevs": 3, 00:11:03.941 "num_base_bdevs_discovered": 1, 00:11:03.941 "num_base_bdevs_operational": 3, 00:11:03.941 "base_bdevs_list": [ 00:11:03.941 { 00:11:03.941 "name": "BaseBdev1", 00:11:03.941 "uuid": "65f2e981-9544-4168-8729-afb25d00099d", 00:11:03.941 "is_configured": true, 00:11:03.941 "data_offset": 2048, 00:11:03.941 "data_size": 63488 00:11:03.941 }, 00:11:03.941 { 00:11:03.941 "name": null, 00:11:03.941 "uuid": "0d6f4ce1-b51c-4c9d-b5c9-30e1c8efc031", 00:11:03.941 "is_configured": false, 00:11:03.941 "data_offset": 0, 00:11:03.941 "data_size": 63488 00:11:03.941 }, 00:11:03.941 { 00:11:03.941 "name": null, 00:11:03.941 "uuid": "85d509e1-b7c5-46b8-a712-689f4729f928", 00:11:03.941 "is_configured": false, 00:11:03.941 "data_offset": 0, 00:11:03.941 "data_size": 63488 00:11:03.941 } 00:11:03.941 ] 00:11:03.941 }' 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.941 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.200 [2024-12-06 16:26:45.981827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.200 16:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.200 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.458 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.458 "name": "Existed_Raid", 00:11:04.458 "uuid": "690735fa-9c65-4aee-92ec-3709fb73491d", 00:11:04.458 "strip_size_kb": 64, 00:11:04.458 "state": "configuring", 00:11:04.458 "raid_level": "raid0", 00:11:04.458 "superblock": true, 00:11:04.458 "num_base_bdevs": 3, 00:11:04.458 "num_base_bdevs_discovered": 2, 00:11:04.458 "num_base_bdevs_operational": 3, 00:11:04.458 "base_bdevs_list": [ 00:11:04.458 { 00:11:04.458 "name": "BaseBdev1", 00:11:04.458 "uuid": "65f2e981-9544-4168-8729-afb25d00099d", 00:11:04.458 "is_configured": true, 00:11:04.458 "data_offset": 2048, 00:11:04.458 "data_size": 63488 00:11:04.458 }, 00:11:04.458 { 00:11:04.458 "name": null, 00:11:04.458 "uuid": "0d6f4ce1-b51c-4c9d-b5c9-30e1c8efc031", 00:11:04.458 "is_configured": false, 00:11:04.458 "data_offset": 0, 00:11:04.458 "data_size": 63488 00:11:04.458 }, 00:11:04.458 { 00:11:04.458 "name": "BaseBdev3", 00:11:04.458 "uuid": "85d509e1-b7c5-46b8-a712-689f4729f928", 00:11:04.458 "is_configured": true, 00:11:04.458 "data_offset": 2048, 00:11:04.458 "data_size": 63488 00:11:04.458 } 00:11:04.458 ] 00:11:04.458 }' 00:11:04.458 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.458 16:26:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.717 [2024-12-06 16:26:46.489018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.717 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.718 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.718 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.718 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.718 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.718 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.718 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.718 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.718 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.718 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.718 "name": "Existed_Raid", 00:11:04.718 "uuid": "690735fa-9c65-4aee-92ec-3709fb73491d", 00:11:04.718 "strip_size_kb": 64, 00:11:04.718 "state": "configuring", 00:11:04.718 "raid_level": "raid0", 00:11:04.718 "superblock": true, 00:11:04.718 "num_base_bdevs": 3, 00:11:04.718 "num_base_bdevs_discovered": 1, 00:11:04.718 "num_base_bdevs_operational": 3, 00:11:04.718 "base_bdevs_list": [ 00:11:04.718 { 00:11:04.718 "name": null, 00:11:04.718 "uuid": "65f2e981-9544-4168-8729-afb25d00099d", 00:11:04.718 "is_configured": false, 00:11:04.718 "data_offset": 0, 00:11:04.718 "data_size": 63488 00:11:04.718 }, 00:11:04.718 { 00:11:04.718 "name": null, 00:11:04.718 "uuid": "0d6f4ce1-b51c-4c9d-b5c9-30e1c8efc031", 00:11:04.718 "is_configured": false, 00:11:04.718 "data_offset": 0, 00:11:04.718 
"data_size": 63488 00:11:04.718 }, 00:11:04.718 { 00:11:04.718 "name": "BaseBdev3", 00:11:04.718 "uuid": "85d509e1-b7c5-46b8-a712-689f4729f928", 00:11:04.718 "is_configured": true, 00:11:04.718 "data_offset": 2048, 00:11:04.718 "data_size": 63488 00:11:04.718 } 00:11:04.718 ] 00:11:04.718 }' 00:11:04.718 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.718 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.286 [2024-12-06 16:26:46.986805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:05.286 16:26:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.286 16:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.286 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.286 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.286 "name": "Existed_Raid", 00:11:05.286 "uuid": "690735fa-9c65-4aee-92ec-3709fb73491d", 00:11:05.286 "strip_size_kb": 64, 00:11:05.286 "state": "configuring", 00:11:05.286 "raid_level": "raid0", 00:11:05.286 "superblock": true, 00:11:05.286 "num_base_bdevs": 3, 00:11:05.286 
"num_base_bdevs_discovered": 2, 00:11:05.286 "num_base_bdevs_operational": 3, 00:11:05.286 "base_bdevs_list": [ 00:11:05.286 { 00:11:05.286 "name": null, 00:11:05.286 "uuid": "65f2e981-9544-4168-8729-afb25d00099d", 00:11:05.286 "is_configured": false, 00:11:05.286 "data_offset": 0, 00:11:05.286 "data_size": 63488 00:11:05.286 }, 00:11:05.286 { 00:11:05.286 "name": "BaseBdev2", 00:11:05.286 "uuid": "0d6f4ce1-b51c-4c9d-b5c9-30e1c8efc031", 00:11:05.286 "is_configured": true, 00:11:05.286 "data_offset": 2048, 00:11:05.286 "data_size": 63488 00:11:05.286 }, 00:11:05.286 { 00:11:05.286 "name": "BaseBdev3", 00:11:05.286 "uuid": "85d509e1-b7c5-46b8-a712-689f4729f928", 00:11:05.286 "is_configured": true, 00:11:05.286 "data_offset": 2048, 00:11:05.286 "data_size": 63488 00:11:05.286 } 00:11:05.286 ] 00:11:05.286 }' 00:11:05.286 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.286 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.856 16:26:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 65f2e981-9544-4168-8729-afb25d00099d 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 [2024-12-06 16:26:47.549362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:05.856 NewBaseBdev 00:11:05.856 [2024-12-06 16:26:47.549669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:05.856 [2024-12-06 16:26:47.549693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:05.856 [2024-12-06 16:26:47.549969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:05.856 [2024-12-06 16:26:47.550096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:05.856 [2024-12-06 16:26:47.550106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:05.856 [2024-12-06 16:26:47.550244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 [ 00:11:05.856 { 00:11:05.856 "name": "NewBaseBdev", 00:11:05.856 "aliases": [ 00:11:05.856 "65f2e981-9544-4168-8729-afb25d00099d" 00:11:05.856 ], 00:11:05.856 "product_name": "Malloc disk", 00:11:05.856 "block_size": 512, 00:11:05.856 "num_blocks": 65536, 00:11:05.856 "uuid": "65f2e981-9544-4168-8729-afb25d00099d", 00:11:05.856 "assigned_rate_limits": { 00:11:05.856 "rw_ios_per_sec": 0, 00:11:05.856 "rw_mbytes_per_sec": 0, 00:11:05.856 "r_mbytes_per_sec": 0, 00:11:05.856 "w_mbytes_per_sec": 0 00:11:05.856 }, 00:11:05.856 "claimed": true, 00:11:05.856 "claim_type": "exclusive_write", 00:11:05.856 "zoned": false, 00:11:05.856 "supported_io_types": { 00:11:05.856 "read": true, 00:11:05.856 "write": true, 
00:11:05.856 "unmap": true, 00:11:05.856 "flush": true, 00:11:05.856 "reset": true, 00:11:05.856 "nvme_admin": false, 00:11:05.856 "nvme_io": false, 00:11:05.856 "nvme_io_md": false, 00:11:05.856 "write_zeroes": true, 00:11:05.856 "zcopy": true, 00:11:05.856 "get_zone_info": false, 00:11:05.856 "zone_management": false, 00:11:05.856 "zone_append": false, 00:11:05.856 "compare": false, 00:11:05.856 "compare_and_write": false, 00:11:05.856 "abort": true, 00:11:05.856 "seek_hole": false, 00:11:05.856 "seek_data": false, 00:11:05.856 "copy": true, 00:11:05.856 "nvme_iov_md": false 00:11:05.856 }, 00:11:05.856 "memory_domains": [ 00:11:05.856 { 00:11:05.856 "dma_device_id": "system", 00:11:05.856 "dma_device_type": 1 00:11:05.856 }, 00:11:05.856 { 00:11:05.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.856 "dma_device_type": 2 00:11:05.856 } 00:11:05.856 ], 00:11:05.856 "driver_specific": {} 00:11:05.856 } 00:11:05.856 ] 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.856 "name": "Existed_Raid", 00:11:05.856 "uuid": "690735fa-9c65-4aee-92ec-3709fb73491d", 00:11:05.856 "strip_size_kb": 64, 00:11:05.856 "state": "online", 00:11:05.856 "raid_level": "raid0", 00:11:05.856 "superblock": true, 00:11:05.856 "num_base_bdevs": 3, 00:11:05.856 "num_base_bdevs_discovered": 3, 00:11:05.856 "num_base_bdevs_operational": 3, 00:11:05.856 "base_bdevs_list": [ 00:11:05.856 { 00:11:05.856 "name": "NewBaseBdev", 00:11:05.856 "uuid": "65f2e981-9544-4168-8729-afb25d00099d", 00:11:05.856 "is_configured": true, 00:11:05.856 "data_offset": 2048, 00:11:05.856 "data_size": 63488 00:11:05.856 }, 00:11:05.856 { 00:11:05.856 "name": "BaseBdev2", 00:11:05.856 "uuid": "0d6f4ce1-b51c-4c9d-b5c9-30e1c8efc031", 00:11:05.856 "is_configured": true, 00:11:05.856 "data_offset": 2048, 00:11:05.856 "data_size": 63488 00:11:05.856 }, 00:11:05.856 { 00:11:05.856 "name": "BaseBdev3", 00:11:05.856 "uuid": 
"85d509e1-b7c5-46b8-a712-689f4729f928", 00:11:05.856 "is_configured": true, 00:11:05.856 "data_offset": 2048, 00:11:05.856 "data_size": 63488 00:11:05.856 } 00:11:05.856 ] 00:11:05.856 }' 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.856 16:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.426 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:06.426 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:06.426 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.426 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:06.426 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.426 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.426 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.426 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:06.426 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.426 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.426 [2024-12-06 16:26:48.060909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.426 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.427 "name": "Existed_Raid", 00:11:06.427 "aliases": [ 00:11:06.427 "690735fa-9c65-4aee-92ec-3709fb73491d" 
00:11:06.427 ], 00:11:06.427 "product_name": "Raid Volume", 00:11:06.427 "block_size": 512, 00:11:06.427 "num_blocks": 190464, 00:11:06.427 "uuid": "690735fa-9c65-4aee-92ec-3709fb73491d", 00:11:06.427 "assigned_rate_limits": { 00:11:06.427 "rw_ios_per_sec": 0, 00:11:06.427 "rw_mbytes_per_sec": 0, 00:11:06.427 "r_mbytes_per_sec": 0, 00:11:06.427 "w_mbytes_per_sec": 0 00:11:06.427 }, 00:11:06.427 "claimed": false, 00:11:06.427 "zoned": false, 00:11:06.427 "supported_io_types": { 00:11:06.427 "read": true, 00:11:06.427 "write": true, 00:11:06.427 "unmap": true, 00:11:06.427 "flush": true, 00:11:06.427 "reset": true, 00:11:06.427 "nvme_admin": false, 00:11:06.427 "nvme_io": false, 00:11:06.427 "nvme_io_md": false, 00:11:06.427 "write_zeroes": true, 00:11:06.427 "zcopy": false, 00:11:06.427 "get_zone_info": false, 00:11:06.427 "zone_management": false, 00:11:06.427 "zone_append": false, 00:11:06.427 "compare": false, 00:11:06.427 "compare_and_write": false, 00:11:06.427 "abort": false, 00:11:06.427 "seek_hole": false, 00:11:06.427 "seek_data": false, 00:11:06.427 "copy": false, 00:11:06.427 "nvme_iov_md": false 00:11:06.427 }, 00:11:06.427 "memory_domains": [ 00:11:06.427 { 00:11:06.427 "dma_device_id": "system", 00:11:06.427 "dma_device_type": 1 00:11:06.427 }, 00:11:06.427 { 00:11:06.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.427 "dma_device_type": 2 00:11:06.427 }, 00:11:06.427 { 00:11:06.427 "dma_device_id": "system", 00:11:06.427 "dma_device_type": 1 00:11:06.427 }, 00:11:06.427 { 00:11:06.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.427 "dma_device_type": 2 00:11:06.427 }, 00:11:06.427 { 00:11:06.427 "dma_device_id": "system", 00:11:06.427 "dma_device_type": 1 00:11:06.427 }, 00:11:06.427 { 00:11:06.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.427 "dma_device_type": 2 00:11:06.427 } 00:11:06.427 ], 00:11:06.427 "driver_specific": { 00:11:06.427 "raid": { 00:11:06.427 "uuid": "690735fa-9c65-4aee-92ec-3709fb73491d", 00:11:06.427 
"strip_size_kb": 64, 00:11:06.427 "state": "online", 00:11:06.427 "raid_level": "raid0", 00:11:06.427 "superblock": true, 00:11:06.427 "num_base_bdevs": 3, 00:11:06.427 "num_base_bdevs_discovered": 3, 00:11:06.427 "num_base_bdevs_operational": 3, 00:11:06.427 "base_bdevs_list": [ 00:11:06.427 { 00:11:06.427 "name": "NewBaseBdev", 00:11:06.427 "uuid": "65f2e981-9544-4168-8729-afb25d00099d", 00:11:06.427 "is_configured": true, 00:11:06.427 "data_offset": 2048, 00:11:06.427 "data_size": 63488 00:11:06.427 }, 00:11:06.427 { 00:11:06.427 "name": "BaseBdev2", 00:11:06.427 "uuid": "0d6f4ce1-b51c-4c9d-b5c9-30e1c8efc031", 00:11:06.427 "is_configured": true, 00:11:06.427 "data_offset": 2048, 00:11:06.427 "data_size": 63488 00:11:06.427 }, 00:11:06.427 { 00:11:06.427 "name": "BaseBdev3", 00:11:06.427 "uuid": "85d509e1-b7c5-46b8-a712-689f4729f928", 00:11:06.427 "is_configured": true, 00:11:06.427 "data_offset": 2048, 00:11:06.427 "data_size": 63488 00:11:06.427 } 00:11:06.427 ] 00:11:06.427 } 00:11:06.427 } 00:11:06.427 }' 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:06.427 BaseBdev2 00:11:06.427 BaseBdev3' 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.427 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.687 [2024-12-06 16:26:48.336090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.687 [2024-12-06 16:26:48.336123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.687 [2024-12-06 16:26:48.336233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.687 [2024-12-06 16:26:48.336297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.687 [2024-12-06 16:26:48.336316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 76012 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76012 ']' 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 76012 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76012 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76012' 00:11:06.687 killing process with pid 76012 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 76012 00:11:06.687 [2024-12-06 16:26:48.387128] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.687 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 76012 00:11:06.687 [2024-12-06 16:26:48.420044] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.948 16:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:06.948 00:11:06.948 real 0m9.058s 00:11:06.948 user 0m15.453s 00:11:06.948 sys 0m1.879s 00:11:06.948 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.948 16:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.948 ************************************ 00:11:06.948 END TEST raid_state_function_test_sb 00:11:06.948 ************************************ 00:11:06.948 16:26:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:11:06.948 16:26:48 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:06.948 16:26:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.948 16:26:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.948 ************************************ 00:11:06.948 START TEST raid_superblock_test 00:11:06.948 ************************************ 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:06.948 16:26:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76621 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76621 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 76621 ']' 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.948 16:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.208 [2024-12-06 16:26:48.801702] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:11:07.208 [2024-12-06 16:26:48.801939] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76621 ] 00:11:07.208 [2024-12-06 16:26:48.956289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.208 [2024-12-06 16:26:48.989019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.208 [2024-12-06 16:26:49.040258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.208 [2024-12-06 16:26:49.040309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:08.148 
16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.148 malloc1 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.148 [2024-12-06 16:26:49.710633] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:08.148 [2024-12-06 16:26:49.710702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.148 [2024-12-06 16:26:49.710723] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:08.148 [2024-12-06 16:26:49.710746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.148 [2024-12-06 16:26:49.713076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.148 [2024-12-06 16:26:49.713221] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:08.148 pt1 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.148 malloc2 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.148 [2024-12-06 16:26:49.739963] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:08.148 [2024-12-06 16:26:49.740091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.148 [2024-12-06 16:26:49.740129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:08.148 [2024-12-06 16:26:49.740159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.148 [2024-12-06 16:26:49.742405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.148 [2024-12-06 16:26:49.742479] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:08.148 
pt2 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.148 malloc3 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.148 [2024-12-06 16:26:49.773362] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:08.148 [2024-12-06 16:26:49.773489] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.148 [2024-12-06 16:26:49.773530] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:08.148 [2024-12-06 16:26:49.773561] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.148 [2024-12-06 16:26:49.775935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.148 [2024-12-06 16:26:49.776025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:08.148 pt3 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.148 [2024-12-06 16:26:49.785399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:08.148 [2024-12-06 16:26:49.787286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:08.148 [2024-12-06 16:26:49.787385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:08.148 [2024-12-06 16:26:49.787535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:08.148 [2024-12-06 16:26:49.787548] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:08.148 [2024-12-06 16:26:49.787879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:11:08.148 [2024-12-06 16:26:49.788039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:08.148 [2024-12-06 16:26:49.788051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:08.148 [2024-12-06 16:26:49.788221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.148 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.149 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.149 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.149 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.149 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.149 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.149 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.149 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.149 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.149 16:26:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.149 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.149 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.149 "name": "raid_bdev1", 00:11:08.149 "uuid": "7595bfda-20b5-472f-aac0-0896e33a7fca", 00:11:08.149 "strip_size_kb": 64, 00:11:08.149 "state": "online", 00:11:08.149 "raid_level": "raid0", 00:11:08.149 "superblock": true, 00:11:08.149 "num_base_bdevs": 3, 00:11:08.149 "num_base_bdevs_discovered": 3, 00:11:08.149 "num_base_bdevs_operational": 3, 00:11:08.149 "base_bdevs_list": [ 00:11:08.149 { 00:11:08.149 "name": "pt1", 00:11:08.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.149 "is_configured": true, 00:11:08.149 "data_offset": 2048, 00:11:08.149 "data_size": 63488 00:11:08.149 }, 00:11:08.149 { 00:11:08.149 "name": "pt2", 00:11:08.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.149 "is_configured": true, 00:11:08.149 "data_offset": 2048, 00:11:08.149 "data_size": 63488 00:11:08.149 }, 00:11:08.149 { 00:11:08.149 "name": "pt3", 00:11:08.149 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.149 "is_configured": true, 00:11:08.149 "data_offset": 2048, 00:11:08.149 "data_size": 63488 00:11:08.149 } 00:11:08.149 ] 00:11:08.149 }' 00:11:08.149 16:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.149 16:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.408 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:08.408 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:08.408 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:08.408 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:08.408 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:08.408 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:08.408 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:08.408 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:08.408 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.408 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.408 [2024-12-06 16:26:50.244984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.668 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.668 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:08.668 "name": "raid_bdev1", 00:11:08.668 "aliases": [ 00:11:08.668 "7595bfda-20b5-472f-aac0-0896e33a7fca" 00:11:08.668 ], 00:11:08.668 "product_name": "Raid Volume", 00:11:08.668 "block_size": 512, 00:11:08.668 "num_blocks": 190464, 00:11:08.668 "uuid": "7595bfda-20b5-472f-aac0-0896e33a7fca", 00:11:08.668 "assigned_rate_limits": { 00:11:08.668 "rw_ios_per_sec": 0, 00:11:08.668 "rw_mbytes_per_sec": 0, 00:11:08.668 "r_mbytes_per_sec": 0, 00:11:08.668 "w_mbytes_per_sec": 0 00:11:08.668 }, 00:11:08.668 "claimed": false, 00:11:08.668 "zoned": false, 00:11:08.668 "supported_io_types": { 00:11:08.668 "read": true, 00:11:08.668 "write": true, 00:11:08.668 "unmap": true, 00:11:08.668 "flush": true, 00:11:08.668 "reset": true, 00:11:08.668 "nvme_admin": false, 00:11:08.668 "nvme_io": false, 00:11:08.669 "nvme_io_md": false, 00:11:08.669 "write_zeroes": true, 00:11:08.669 "zcopy": false, 00:11:08.669 "get_zone_info": false, 00:11:08.669 "zone_management": false, 00:11:08.669 "zone_append": false, 00:11:08.669 "compare": 
false, 00:11:08.669 "compare_and_write": false, 00:11:08.669 "abort": false, 00:11:08.669 "seek_hole": false, 00:11:08.669 "seek_data": false, 00:11:08.669 "copy": false, 00:11:08.669 "nvme_iov_md": false 00:11:08.669 }, 00:11:08.669 "memory_domains": [ 00:11:08.669 { 00:11:08.669 "dma_device_id": "system", 00:11:08.669 "dma_device_type": 1 00:11:08.669 }, 00:11:08.669 { 00:11:08.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.669 "dma_device_type": 2 00:11:08.669 }, 00:11:08.669 { 00:11:08.669 "dma_device_id": "system", 00:11:08.669 "dma_device_type": 1 00:11:08.669 }, 00:11:08.669 { 00:11:08.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.669 "dma_device_type": 2 00:11:08.669 }, 00:11:08.669 { 00:11:08.669 "dma_device_id": "system", 00:11:08.669 "dma_device_type": 1 00:11:08.669 }, 00:11:08.669 { 00:11:08.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.669 "dma_device_type": 2 00:11:08.669 } 00:11:08.669 ], 00:11:08.669 "driver_specific": { 00:11:08.669 "raid": { 00:11:08.669 "uuid": "7595bfda-20b5-472f-aac0-0896e33a7fca", 00:11:08.669 "strip_size_kb": 64, 00:11:08.669 "state": "online", 00:11:08.669 "raid_level": "raid0", 00:11:08.669 "superblock": true, 00:11:08.669 "num_base_bdevs": 3, 00:11:08.669 "num_base_bdevs_discovered": 3, 00:11:08.669 "num_base_bdevs_operational": 3, 00:11:08.669 "base_bdevs_list": [ 00:11:08.669 { 00:11:08.669 "name": "pt1", 00:11:08.669 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.669 "is_configured": true, 00:11:08.669 "data_offset": 2048, 00:11:08.669 "data_size": 63488 00:11:08.669 }, 00:11:08.669 { 00:11:08.669 "name": "pt2", 00:11:08.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.669 "is_configured": true, 00:11:08.669 "data_offset": 2048, 00:11:08.669 "data_size": 63488 00:11:08.669 }, 00:11:08.669 { 00:11:08.669 "name": "pt3", 00:11:08.669 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.669 "is_configured": true, 00:11:08.669 "data_offset": 2048, 00:11:08.669 "data_size": 
63488 00:11:08.669 } 00:11:08.669 ] 00:11:08.669 } 00:11:08.669 } 00:11:08.669 }' 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:08.669 pt2 00:11:08.669 pt3' 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.669 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.929 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.929 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.929 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:08.929 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:08.929 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.929 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.929 [2024-12-06 16:26:50.552456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7595bfda-20b5-472f-aac0-0896e33a7fca 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7595bfda-20b5-472f-aac0-0896e33a7fca ']' 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.930 [2024-12-06 16:26:50.596026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.930 [2024-12-06 16:26:50.596062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.930 [2024-12-06 16:26:50.596168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.930 [2024-12-06 16:26:50.596246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.930 [2024-12-06 16:26:50.596260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.930 [2024-12-06 16:26:50.735961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:08.930 [2024-12-06 16:26:50.737987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:08.930 [2024-12-06 16:26:50.738040] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:08.930 [2024-12-06 16:26:50.738093] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:08.930 [2024-12-06 16:26:50.738143] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:08.930 [2024-12-06 16:26:50.738163] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:08.930 [2024-12-06 16:26:50.738177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.930 [2024-12-06 16:26:50.738197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:11:08.930 request: 00:11:08.930 { 00:11:08.930 "name": "raid_bdev1", 00:11:08.930 "raid_level": "raid0", 00:11:08.930 "base_bdevs": [ 00:11:08.930 "malloc1", 00:11:08.930 "malloc2", 00:11:08.930 "malloc3" 00:11:08.930 ], 00:11:08.930 "strip_size_kb": 64, 00:11:08.930 "superblock": false, 00:11:08.930 "method": "bdev_raid_create", 00:11:08.930 "req_id": 1 00:11:08.930 } 00:11:08.930 Got JSON-RPC error response 00:11:08.930 response: 00:11:08.930 { 00:11:08.930 "code": -17, 00:11:08.930 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:08.930 } 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.930 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.189 [2024-12-06 16:26:50.795822] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:09.189 [2024-12-06 16:26:50.795964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.189 [2024-12-06 16:26:50.796019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:09.189 [2024-12-06 16:26:50.796071] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.189 [2024-12-06 16:26:50.798559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.189 [2024-12-06 16:26:50.798672] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:09.189 [2024-12-06 16:26:50.798842] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:09.189 [2024-12-06 16:26:50.798927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:11:09.189 pt1 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.189 "name": "raid_bdev1", 00:11:09.189 "uuid": "7595bfda-20b5-472f-aac0-0896e33a7fca", 00:11:09.189 
"strip_size_kb": 64, 00:11:09.189 "state": "configuring", 00:11:09.189 "raid_level": "raid0", 00:11:09.189 "superblock": true, 00:11:09.189 "num_base_bdevs": 3, 00:11:09.189 "num_base_bdevs_discovered": 1, 00:11:09.189 "num_base_bdevs_operational": 3, 00:11:09.189 "base_bdevs_list": [ 00:11:09.189 { 00:11:09.189 "name": "pt1", 00:11:09.189 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.189 "is_configured": true, 00:11:09.189 "data_offset": 2048, 00:11:09.189 "data_size": 63488 00:11:09.189 }, 00:11:09.189 { 00:11:09.189 "name": null, 00:11:09.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.189 "is_configured": false, 00:11:09.189 "data_offset": 2048, 00:11:09.189 "data_size": 63488 00:11:09.189 }, 00:11:09.189 { 00:11:09.189 "name": null, 00:11:09.189 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.189 "is_configured": false, 00:11:09.189 "data_offset": 2048, 00:11:09.189 "data_size": 63488 00:11:09.189 } 00:11:09.189 ] 00:11:09.189 }' 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.189 16:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.449 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:09.449 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.449 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.449 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.449 [2024-12-06 16:26:51.275065] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.449 [2024-12-06 16:26:51.275145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.449 [2024-12-06 16:26:51.275170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:11:09.449 [2024-12-06 16:26:51.275186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.449 [2024-12-06 16:26:51.275698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.449 [2024-12-06 16:26:51.275741] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.449 [2024-12-06 16:26:51.275838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:09.449 [2024-12-06 16:26:51.275878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.449 pt2 00:11:09.449 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.449 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:09.449 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.449 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.724 [2024-12-06 16:26:51.287060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.724 16:26:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.724 "name": "raid_bdev1", 00:11:09.724 "uuid": "7595bfda-20b5-472f-aac0-0896e33a7fca", 00:11:09.724 "strip_size_kb": 64, 00:11:09.724 "state": "configuring", 00:11:09.724 "raid_level": "raid0", 00:11:09.724 "superblock": true, 00:11:09.724 "num_base_bdevs": 3, 00:11:09.724 "num_base_bdevs_discovered": 1, 00:11:09.724 "num_base_bdevs_operational": 3, 00:11:09.724 "base_bdevs_list": [ 00:11:09.724 { 00:11:09.724 "name": "pt1", 00:11:09.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.724 "is_configured": true, 00:11:09.724 "data_offset": 2048, 00:11:09.724 "data_size": 63488 00:11:09.724 }, 00:11:09.724 { 00:11:09.724 "name": null, 00:11:09.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.724 "is_configured": false, 00:11:09.724 "data_offset": 0, 00:11:09.724 "data_size": 63488 00:11:09.724 }, 00:11:09.724 { 00:11:09.724 "name": null, 00:11:09.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.724 
"is_configured": false, 00:11:09.724 "data_offset": 2048, 00:11:09.724 "data_size": 63488 00:11:09.724 } 00:11:09.724 ] 00:11:09.724 }' 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.724 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.983 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:09.983 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.983 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.983 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.983 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.983 [2024-12-06 16:26:51.730278] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.983 [2024-12-06 16:26:51.730429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.983 [2024-12-06 16:26:51.730481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:09.983 [2024-12-06 16:26:51.730525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.983 [2024-12-06 16:26:51.730993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.983 [2024-12-06 16:26:51.731058] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.983 [2024-12-06 16:26:51.731182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:09.983 [2024-12-06 16:26:51.731254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.983 pt2 00:11:09.983 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:09.983 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:09.983 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.983 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:09.983 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.983 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.983 [2024-12-06 16:26:51.742244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:09.983 [2024-12-06 16:26:51.742336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.983 [2024-12-06 16:26:51.742383] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:09.983 [2024-12-06 16:26:51.742417] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.983 [2024-12-06 16:26:51.742796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.983 [2024-12-06 16:26:51.742857] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:09.983 [2024-12-06 16:26:51.742948] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:09.983 [2024-12-06 16:26:51.742997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:09.983 [2024-12-06 16:26:51.743144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:09.983 [2024-12-06 16:26:51.743190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:09.983 [2024-12-06 16:26:51.743499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:09.983 [2024-12-06 16:26:51.743643] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:09.983 [2024-12-06 16:26:51.743684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:09.983 [2024-12-06 16:26:51.743855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.983 pt3 00:11:09.983 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.984 "name": "raid_bdev1", 00:11:09.984 "uuid": "7595bfda-20b5-472f-aac0-0896e33a7fca", 00:11:09.984 "strip_size_kb": 64, 00:11:09.984 "state": "online", 00:11:09.984 "raid_level": "raid0", 00:11:09.984 "superblock": true, 00:11:09.984 "num_base_bdevs": 3, 00:11:09.984 "num_base_bdevs_discovered": 3, 00:11:09.984 "num_base_bdevs_operational": 3, 00:11:09.984 "base_bdevs_list": [ 00:11:09.984 { 00:11:09.984 "name": "pt1", 00:11:09.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.984 "is_configured": true, 00:11:09.984 "data_offset": 2048, 00:11:09.984 "data_size": 63488 00:11:09.984 }, 00:11:09.984 { 00:11:09.984 "name": "pt2", 00:11:09.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.984 "is_configured": true, 00:11:09.984 "data_offset": 2048, 00:11:09.984 "data_size": 63488 00:11:09.984 }, 00:11:09.984 { 00:11:09.984 "name": "pt3", 00:11:09.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.984 "is_configured": true, 00:11:09.984 "data_offset": 2048, 00:11:09.984 "data_size": 63488 00:11:09.984 } 00:11:09.984 ] 00:11:09.984 }' 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.984 16:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.552 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:10.552 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:10.552 16:26:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.552 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.552 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.553 [2024-12-06 16:26:52.181818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.553 "name": "raid_bdev1", 00:11:10.553 "aliases": [ 00:11:10.553 "7595bfda-20b5-472f-aac0-0896e33a7fca" 00:11:10.553 ], 00:11:10.553 "product_name": "Raid Volume", 00:11:10.553 "block_size": 512, 00:11:10.553 "num_blocks": 190464, 00:11:10.553 "uuid": "7595bfda-20b5-472f-aac0-0896e33a7fca", 00:11:10.553 "assigned_rate_limits": { 00:11:10.553 "rw_ios_per_sec": 0, 00:11:10.553 "rw_mbytes_per_sec": 0, 00:11:10.553 "r_mbytes_per_sec": 0, 00:11:10.553 "w_mbytes_per_sec": 0 00:11:10.553 }, 00:11:10.553 "claimed": false, 00:11:10.553 "zoned": false, 00:11:10.553 "supported_io_types": { 00:11:10.553 "read": true, 00:11:10.553 "write": true, 00:11:10.553 "unmap": true, 00:11:10.553 "flush": true, 00:11:10.553 "reset": true, 00:11:10.553 "nvme_admin": false, 00:11:10.553 "nvme_io": false, 00:11:10.553 "nvme_io_md": false, 00:11:10.553 
"write_zeroes": true, 00:11:10.553 "zcopy": false, 00:11:10.553 "get_zone_info": false, 00:11:10.553 "zone_management": false, 00:11:10.553 "zone_append": false, 00:11:10.553 "compare": false, 00:11:10.553 "compare_and_write": false, 00:11:10.553 "abort": false, 00:11:10.553 "seek_hole": false, 00:11:10.553 "seek_data": false, 00:11:10.553 "copy": false, 00:11:10.553 "nvme_iov_md": false 00:11:10.553 }, 00:11:10.553 "memory_domains": [ 00:11:10.553 { 00:11:10.553 "dma_device_id": "system", 00:11:10.553 "dma_device_type": 1 00:11:10.553 }, 00:11:10.553 { 00:11:10.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.553 "dma_device_type": 2 00:11:10.553 }, 00:11:10.553 { 00:11:10.553 "dma_device_id": "system", 00:11:10.553 "dma_device_type": 1 00:11:10.553 }, 00:11:10.553 { 00:11:10.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.553 "dma_device_type": 2 00:11:10.553 }, 00:11:10.553 { 00:11:10.553 "dma_device_id": "system", 00:11:10.553 "dma_device_type": 1 00:11:10.553 }, 00:11:10.553 { 00:11:10.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.553 "dma_device_type": 2 00:11:10.553 } 00:11:10.553 ], 00:11:10.553 "driver_specific": { 00:11:10.553 "raid": { 00:11:10.553 "uuid": "7595bfda-20b5-472f-aac0-0896e33a7fca", 00:11:10.553 "strip_size_kb": 64, 00:11:10.553 "state": "online", 00:11:10.553 "raid_level": "raid0", 00:11:10.553 "superblock": true, 00:11:10.553 "num_base_bdevs": 3, 00:11:10.553 "num_base_bdevs_discovered": 3, 00:11:10.553 "num_base_bdevs_operational": 3, 00:11:10.553 "base_bdevs_list": [ 00:11:10.553 { 00:11:10.553 "name": "pt1", 00:11:10.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.553 "is_configured": true, 00:11:10.553 "data_offset": 2048, 00:11:10.553 "data_size": 63488 00:11:10.553 }, 00:11:10.553 { 00:11:10.553 "name": "pt2", 00:11:10.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.553 "is_configured": true, 00:11:10.553 "data_offset": 2048, 00:11:10.553 "data_size": 63488 00:11:10.553 }, 00:11:10.553 
{ 00:11:10.553 "name": "pt3", 00:11:10.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.553 "is_configured": true, 00:11:10.553 "data_offset": 2048, 00:11:10.553 "data_size": 63488 00:11:10.553 } 00:11:10.553 ] 00:11:10.553 } 00:11:10.553 } 00:11:10.553 }' 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:10.553 pt2 00:11:10.553 pt3' 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:10.553 16:26:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.553 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:10.813 
[2024-12-06 16:26:52.449385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7595bfda-20b5-472f-aac0-0896e33a7fca '!=' 7595bfda-20b5-472f-aac0-0896e33a7fca ']' 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76621 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 76621 ']' 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 76621 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:10.813 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.814 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76621 00:11:10.814 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.814 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.814 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76621' 00:11:10.814 killing process with pid 76621 00:11:10.814 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 76621 00:11:10.814 [2024-12-06 16:26:52.528238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.814 16:26:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 76621 00:11:10.814 [2024-12-06 16:26:52.528445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.814 [2024-12-06 16:26:52.528525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.814 [2024-12-06 16:26:52.528536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:10.814 [2024-12-06 16:26:52.564357] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.074 16:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:11.074 00:11:11.074 real 0m4.080s 00:11:11.074 user 0m6.478s 00:11:11.074 sys 0m0.852s 00:11:11.074 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.074 16:26:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.074 ************************************ 00:11:11.074 END TEST raid_superblock_test 00:11:11.074 ************************************ 00:11:11.074 16:26:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:11:11.074 16:26:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:11.074 16:26:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.074 16:26:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.074 ************************************ 00:11:11.074 START TEST raid_read_error_test 00:11:11.074 ************************************ 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:11.074 16:26:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ca0oW1rr4c 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76863 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76863 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76863 ']' 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.074 16:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.334 [2024-12-06 16:26:52.963102] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:11:11.334 [2024-12-06 16:26:52.963243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76863 ] 00:11:11.334 [2024-12-06 16:26:53.135984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.334 [2024-12-06 16:26:53.163435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.593 [2024-12-06 16:26:53.207595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.593 [2024-12-06 16:26:53.207634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.162 BaseBdev1_malloc 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.162 true 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.162 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.162 [2024-12-06 16:26:53.907954] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:12.162 [2024-12-06 16:26:53.908012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.162 [2024-12-06 16:26:53.908033] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:12.162 [2024-12-06 16:26:53.908044] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.163 [2024-12-06 16:26:53.910316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.163 [2024-12-06 16:26:53.910352] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.163 BaseBdev1 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.163 BaseBdev2_malloc 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.163 true 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.163 [2024-12-06 16:26:53.949020] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:12.163 [2024-12-06 16:26:53.949075] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.163 [2024-12-06 16:26:53.949093] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:12.163 [2024-12-06 16:26:53.949102] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.163 [2024-12-06 16:26:53.951350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.163 [2024-12-06 16:26:53.951432] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.163 BaseBdev2 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.163 BaseBdev3_malloc 00:11:12.163 16:26:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.163 true 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.163 [2024-12-06 16:26:53.990456] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:12.163 [2024-12-06 16:26:53.990586] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.163 [2024-12-06 16:26:53.990626] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:12.163 [2024-12-06 16:26:53.990639] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.163 [2024-12-06 16:26:53.993030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.163 [2024-12-06 16:26:53.993070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:12.163 BaseBdev3 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.163 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.423 [2024-12-06 16:26:54.002507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.423 [2024-12-06 16:26:54.004540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.423 [2024-12-06 16:26:54.004627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.423 [2024-12-06 16:26:54.004818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:12.423 [2024-12-06 16:26:54.004843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:12.423 [2024-12-06 16:26:54.005121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:12.423 [2024-12-06 16:26:54.005272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:12.423 [2024-12-06 16:26:54.005289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:11:12.423 [2024-12-06 16:26:54.005407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.423 16:26:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.423 "name": "raid_bdev1", 00:11:12.423 "uuid": "079519d3-8ac0-4a04-94ad-5d78e7e50a0f", 00:11:12.423 "strip_size_kb": 64, 00:11:12.423 "state": "online", 00:11:12.423 "raid_level": "raid0", 00:11:12.423 "superblock": true, 00:11:12.423 "num_base_bdevs": 3, 00:11:12.423 "num_base_bdevs_discovered": 3, 00:11:12.423 "num_base_bdevs_operational": 3, 00:11:12.423 "base_bdevs_list": [ 00:11:12.423 { 00:11:12.423 "name": "BaseBdev1", 00:11:12.423 "uuid": "76409af4-c9d5-50b3-ba54-21c1cd998718", 00:11:12.423 "is_configured": true, 00:11:12.423 "data_offset": 2048, 00:11:12.423 "data_size": 63488 00:11:12.423 }, 00:11:12.423 { 00:11:12.423 "name": "BaseBdev2", 00:11:12.423 "uuid": "eb630a55-52c0-5586-9c39-d9ab1fc24c98", 00:11:12.423 "is_configured": true, 00:11:12.423 "data_offset": 2048, 00:11:12.423 "data_size": 63488 
00:11:12.423 }, 00:11:12.423 { 00:11:12.423 "name": "BaseBdev3", 00:11:12.423 "uuid": "f7f7d302-9d13-5efc-8f20-e0c7fc6ce8ce", 00:11:12.423 "is_configured": true, 00:11:12.423 "data_offset": 2048, 00:11:12.423 "data_size": 63488 00:11:12.423 } 00:11:12.423 ] 00:11:12.423 }' 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.423 16:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.684 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:12.684 16:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:12.684 [2024-12-06 16:26:54.518019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.622 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.881 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.881 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.881 "name": "raid_bdev1", 00:11:13.881 "uuid": "079519d3-8ac0-4a04-94ad-5d78e7e50a0f", 00:11:13.881 "strip_size_kb": 64, 00:11:13.881 "state": "online", 00:11:13.881 "raid_level": "raid0", 00:11:13.881 "superblock": true, 00:11:13.881 "num_base_bdevs": 3, 00:11:13.881 "num_base_bdevs_discovered": 3, 00:11:13.881 "num_base_bdevs_operational": 3, 00:11:13.881 "base_bdevs_list": [ 00:11:13.881 { 00:11:13.881 "name": "BaseBdev1", 00:11:13.881 "uuid": "76409af4-c9d5-50b3-ba54-21c1cd998718", 00:11:13.881 "is_configured": true, 00:11:13.881 "data_offset": 2048, 00:11:13.881 "data_size": 63488 
00:11:13.881 }, 00:11:13.881 { 00:11:13.881 "name": "BaseBdev2", 00:11:13.881 "uuid": "eb630a55-52c0-5586-9c39-d9ab1fc24c98", 00:11:13.881 "is_configured": true, 00:11:13.881 "data_offset": 2048, 00:11:13.881 "data_size": 63488 00:11:13.881 }, 00:11:13.881 { 00:11:13.881 "name": "BaseBdev3", 00:11:13.881 "uuid": "f7f7d302-9d13-5efc-8f20-e0c7fc6ce8ce", 00:11:13.881 "is_configured": true, 00:11:13.881 "data_offset": 2048, 00:11:13.881 "data_size": 63488 00:11:13.881 } 00:11:13.881 ] 00:11:13.881 }' 00:11:13.881 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.881 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.142 [2024-12-06 16:26:55.814180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.142 [2024-12-06 16:26:55.814230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.142 [2024-12-06 16:26:55.817264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.142 [2024-12-06 16:26:55.817319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.142 [2024-12-06 16:26:55.817360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.142 [2024-12-06 16:26:55.817373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:11:14.142 { 00:11:14.142 "results": [ 00:11:14.142 { 00:11:14.142 "job": "raid_bdev1", 00:11:14.142 "core_mask": "0x1", 00:11:14.142 "workload": "randrw", 00:11:14.142 "percentage": 50, 
00:11:14.142 "status": "finished", 00:11:14.142 "queue_depth": 1, 00:11:14.142 "io_size": 131072, 00:11:14.142 "runtime": 1.296557, 00:11:14.142 "iops": 15008.210205953152, 00:11:14.142 "mibps": 1876.026275744144, 00:11:14.142 "io_failed": 1, 00:11:14.142 "io_timeout": 0, 00:11:14.142 "avg_latency_us": 92.01521822841165, 00:11:14.142 "min_latency_us": 27.276855895196505, 00:11:14.142 "max_latency_us": 1445.2262008733624 00:11:14.142 } 00:11:14.142 ], 00:11:14.142 "core_count": 1 00:11:14.142 } 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76863 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76863 ']' 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76863 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76863 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76863' 00:11:14.142 killing process with pid 76863 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76863 00:11:14.142 [2024-12-06 16:26:55.865366] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.142 16:26:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76863 00:11:14.142 [2024-12-06 
16:26:55.891247] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.406 16:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ca0oW1rr4c 00:11:14.406 16:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:14.406 16:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:14.406 16:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:11:14.406 16:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:14.406 16:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.406 16:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:14.406 16:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:11:14.406 00:11:14.406 real 0m3.252s 00:11:14.406 user 0m4.128s 00:11:14.406 sys 0m0.537s 00:11:14.406 16:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.406 16:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.406 ************************************ 00:11:14.406 END TEST raid_read_error_test 00:11:14.406 ************************************ 00:11:14.406 16:26:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:11:14.406 16:26:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:14.406 16:26:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.406 16:26:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.406 ************************************ 00:11:14.406 START TEST raid_write_error_test 00:11:14.406 ************************************ 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:11:14.406 16:26:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:14.406 16:26:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ErVB86xMCd 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76992 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76992 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76992 ']' 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.406 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:14.407 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.407 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.666 [2024-12-06 16:26:56.280871] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:11:14.666 [2024-12-06 16:26:56.281003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76992 ] 00:11:14.666 [2024-12-06 16:26:56.452574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.666 [2024-12-06 16:26:56.480939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.926 [2024-12-06 16:26:56.524168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.926 [2024-12-06 16:26:56.524209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.496 BaseBdev1_malloc 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.496 true 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.496 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.496 [2024-12-06 16:26:57.179941] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:15.496 [2024-12-06 16:26:57.180084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.496 [2024-12-06 16:26:57.180138] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:15.497 [2024-12-06 16:26:57.180181] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.497 [2024-12-06 16:26:57.182510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.497 [2024-12-06 16:26:57.182581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:15.497 BaseBdev1 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:15.497 BaseBdev2_malloc 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.497 true 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.497 [2024-12-06 16:26:57.220533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:15.497 [2024-12-06 16:26:57.220642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.497 [2024-12-06 16:26:57.220684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:15.497 [2024-12-06 16:26:57.220740] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.497 [2024-12-06 16:26:57.223060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.497 [2024-12-06 16:26:57.223132] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:15.497 BaseBdev2 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.497 16:26:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.497 BaseBdev3_malloc 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.497 true 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.497 [2024-12-06 16:26:57.261333] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:15.497 [2024-12-06 16:26:57.261382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.497 [2024-12-06 16:26:57.261416] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:15.497 [2024-12-06 16:26:57.261425] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.497 [2024-12-06 16:26:57.263621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.497 [2024-12-06 16:26:57.263657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:15.497 BaseBdev3 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.497 [2024-12-06 16:26:57.273386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.497 [2024-12-06 16:26:57.275353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.497 [2024-12-06 16:26:57.275471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.497 [2024-12-06 16:26:57.275669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:15.497 [2024-12-06 16:26:57.275727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:15.497 [2024-12-06 16:26:57.276019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:15.497 [2024-12-06 16:26:57.276237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:15.497 [2024-12-06 16:26:57.276285] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:11:15.497 [2024-12-06 16:26:57.276469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.497 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.756 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.756 "name": "raid_bdev1", 00:11:15.756 "uuid": "a014cb17-c48f-48af-afec-d7a12f7eeb11", 00:11:15.756 "strip_size_kb": 64, 00:11:15.756 "state": "online", 00:11:15.756 "raid_level": "raid0", 00:11:15.756 "superblock": true, 00:11:15.756 "num_base_bdevs": 3, 00:11:15.756 "num_base_bdevs_discovered": 3, 00:11:15.756 "num_base_bdevs_operational": 3, 00:11:15.756 "base_bdevs_list": [ 00:11:15.756 { 00:11:15.756 "name": "BaseBdev1", 
00:11:15.756 "uuid": "ee9c7d8c-d5c5-5319-a6c8-7eebf7ac198e", 00:11:15.756 "is_configured": true, 00:11:15.756 "data_offset": 2048, 00:11:15.756 "data_size": 63488 00:11:15.756 }, 00:11:15.756 { 00:11:15.756 "name": "BaseBdev2", 00:11:15.756 "uuid": "1e2781e3-084c-5d55-8a5a-96e2432e15f3", 00:11:15.756 "is_configured": true, 00:11:15.756 "data_offset": 2048, 00:11:15.756 "data_size": 63488 00:11:15.756 }, 00:11:15.756 { 00:11:15.756 "name": "BaseBdev3", 00:11:15.756 "uuid": "fc9f4067-a23b-580e-9372-4c881492a57a", 00:11:15.756 "is_configured": true, 00:11:15.756 "data_offset": 2048, 00:11:15.756 "data_size": 63488 00:11:15.756 } 00:11:15.756 ] 00:11:15.756 }' 00:11:15.756 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.756 16:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.017 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:16.017 16:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:16.017 [2024-12-06 16:26:57.840757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.956 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.957 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.957 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.957 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.957 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.957 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.957 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.957 16:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.957 16:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.957 16:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.216 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.217 "name": "raid_bdev1", 00:11:17.217 "uuid": "a014cb17-c48f-48af-afec-d7a12f7eeb11", 00:11:17.217 "strip_size_kb": 64, 00:11:17.217 "state": "online", 00:11:17.217 
"raid_level": "raid0", 00:11:17.217 "superblock": true, 00:11:17.217 "num_base_bdevs": 3, 00:11:17.217 "num_base_bdevs_discovered": 3, 00:11:17.217 "num_base_bdevs_operational": 3, 00:11:17.217 "base_bdevs_list": [ 00:11:17.217 { 00:11:17.217 "name": "BaseBdev1", 00:11:17.217 "uuid": "ee9c7d8c-d5c5-5319-a6c8-7eebf7ac198e", 00:11:17.217 "is_configured": true, 00:11:17.217 "data_offset": 2048, 00:11:17.217 "data_size": 63488 00:11:17.217 }, 00:11:17.217 { 00:11:17.217 "name": "BaseBdev2", 00:11:17.217 "uuid": "1e2781e3-084c-5d55-8a5a-96e2432e15f3", 00:11:17.217 "is_configured": true, 00:11:17.217 "data_offset": 2048, 00:11:17.217 "data_size": 63488 00:11:17.217 }, 00:11:17.217 { 00:11:17.217 "name": "BaseBdev3", 00:11:17.217 "uuid": "fc9f4067-a23b-580e-9372-4c881492a57a", 00:11:17.217 "is_configured": true, 00:11:17.217 "data_offset": 2048, 00:11:17.217 "data_size": 63488 00:11:17.217 } 00:11:17.217 ] 00:11:17.217 }' 00:11:17.217 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.217 16:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.476 16:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.476 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.476 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.476 [2024-12-06 16:26:59.241404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.476 [2024-12-06 16:26:59.241484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.476 [2024-12-06 16:26:59.244475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.476 [2024-12-06 16:26:59.244581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.476 [2024-12-06 16:26:59.244644] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.476 [2024-12-06 16:26:59.244697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:11:17.476 { 00:11:17.476 "results": [ 00:11:17.476 { 00:11:17.476 "job": "raid_bdev1", 00:11:17.476 "core_mask": "0x1", 00:11:17.476 "workload": "randrw", 00:11:17.476 "percentage": 50, 00:11:17.476 "status": "finished", 00:11:17.477 "queue_depth": 1, 00:11:17.477 "io_size": 131072, 00:11:17.477 "runtime": 1.401615, 00:11:17.477 "iops": 15471.438305098047, 00:11:17.477 "mibps": 1933.9297881372559, 00:11:17.477 "io_failed": 1, 00:11:17.477 "io_timeout": 0, 00:11:17.477 "avg_latency_us": 89.34898968887822, 00:11:17.477 "min_latency_us": 27.276855895196505, 00:11:17.477 "max_latency_us": 1452.380786026201 00:11:17.477 } 00:11:17.477 ], 00:11:17.477 "core_count": 1 00:11:17.477 } 00:11:17.477 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.477 16:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76992 00:11:17.477 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76992 ']' 00:11:17.477 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76992 00:11:17.477 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:17.477 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.477 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76992 00:11:17.477 killing process with pid 76992 00:11:17.477 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.477 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.477 16:26:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76992' 00:11:17.477 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76992 00:11:17.477 [2024-12-06 16:26:59.290068] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.477 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76992 00:11:17.736 [2024-12-06 16:26:59.316173] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.736 16:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:17.736 16:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ErVB86xMCd 00:11:17.736 16:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:17.736 16:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:17.736 16:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:17.736 16:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.736 16:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.736 16:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:17.736 00:11:17.736 real 0m3.352s 00:11:17.736 user 0m4.301s 00:11:17.736 sys 0m0.541s 00:11:17.736 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.736 ************************************ 00:11:17.736 END TEST raid_write_error_test 00:11:17.736 ************************************ 00:11:17.736 16:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.995 16:26:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:17.995 16:26:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:11:17.995 16:26:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:17.995 16:26:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.995 16:26:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:17.995 ************************************ 00:11:17.995 START TEST raid_state_function_test 00:11:17.995 ************************************ 00:11:17.995 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:11:17.995 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:17.995 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:17.995 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:17.995 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:17.995 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:17.995 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:17.996 16:26:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77120 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77120' 00:11:17.996 Process raid pid: 77120 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77120 00:11:17.996 16:26:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 77120 ']' 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.996 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.996 [2024-12-06 16:26:59.707071] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:11:17.996 [2024-12-06 16:26:59.707329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.254 [2024-12-06 16:26:59.880842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.254 [2024-12-06 16:26:59.911390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.254 [2024-12-06 16:26:59.956733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.254 [2024-12-06 16:26:59.956859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.821 [2024-12-06 16:27:00.560877] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.821 [2024-12-06 16:27:00.560997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.821 [2024-12-06 16:27:00.561049] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.821 [2024-12-06 16:27:00.561074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.821 [2024-12-06 16:27:00.561105] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:18.821 [2024-12-06 16:27:00.561137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.821 "name": "Existed_Raid", 00:11:18.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.821 "strip_size_kb": 64, 00:11:18.821 "state": "configuring", 00:11:18.821 "raid_level": "concat", 00:11:18.821 "superblock": false, 00:11:18.821 "num_base_bdevs": 3, 00:11:18.821 "num_base_bdevs_discovered": 0, 00:11:18.821 "num_base_bdevs_operational": 3, 00:11:18.821 "base_bdevs_list": [ 00:11:18.821 { 00:11:18.821 "name": "BaseBdev1", 00:11:18.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.821 "is_configured": false, 00:11:18.821 "data_offset": 0, 00:11:18.821 "data_size": 0 00:11:18.821 }, 00:11:18.821 { 00:11:18.821 "name": "BaseBdev2", 00:11:18.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.821 "is_configured": false, 00:11:18.821 "data_offset": 0, 00:11:18.821 "data_size": 0 00:11:18.821 }, 00:11:18.821 { 00:11:18.821 "name": "BaseBdev3", 00:11:18.821 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:18.821 "is_configured": false, 00:11:18.821 "data_offset": 0, 00:11:18.821 "data_size": 0 00:11:18.821 } 00:11:18.821 ] 00:11:18.821 }' 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.821 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.389 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:19.389 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.389 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.389 [2024-12-06 16:27:00.992069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:19.389 [2024-12-06 16:27:00.992170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:19.389 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.389 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:19.389 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.389 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.389 [2024-12-06 16:27:01.004033] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:19.389 [2024-12-06 16:27:01.004123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:19.389 [2024-12-06 16:27:01.004175] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:19.389 [2024-12-06 16:27:01.004241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:11:19.389 [2024-12-06 16:27:01.004287] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:19.389 [2024-12-06 16:27:01.004316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.389 [2024-12-06 16:27:01.025326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.389 BaseBdev1 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.389 [ 00:11:19.389 { 00:11:19.389 "name": "BaseBdev1", 00:11:19.389 "aliases": [ 00:11:19.389 "c99d8b65-beab-4be5-a796-e525be1ae29d" 00:11:19.389 ], 00:11:19.389 "product_name": "Malloc disk", 00:11:19.389 "block_size": 512, 00:11:19.389 "num_blocks": 65536, 00:11:19.389 "uuid": "c99d8b65-beab-4be5-a796-e525be1ae29d", 00:11:19.389 "assigned_rate_limits": { 00:11:19.389 "rw_ios_per_sec": 0, 00:11:19.389 "rw_mbytes_per_sec": 0, 00:11:19.389 "r_mbytes_per_sec": 0, 00:11:19.389 "w_mbytes_per_sec": 0 00:11:19.389 }, 00:11:19.389 "claimed": true, 00:11:19.389 "claim_type": "exclusive_write", 00:11:19.389 "zoned": false, 00:11:19.389 "supported_io_types": { 00:11:19.389 "read": true, 00:11:19.389 "write": true, 00:11:19.389 "unmap": true, 00:11:19.389 "flush": true, 00:11:19.389 "reset": true, 00:11:19.389 "nvme_admin": false, 00:11:19.389 "nvme_io": false, 00:11:19.389 "nvme_io_md": false, 00:11:19.389 "write_zeroes": true, 00:11:19.389 "zcopy": true, 00:11:19.389 "get_zone_info": false, 00:11:19.389 "zone_management": false, 00:11:19.389 "zone_append": false, 00:11:19.389 "compare": false, 00:11:19.389 "compare_and_write": false, 00:11:19.389 "abort": true, 00:11:19.389 "seek_hole": false, 00:11:19.389 "seek_data": false, 00:11:19.389 "copy": true, 00:11:19.389 "nvme_iov_md": false 00:11:19.389 }, 00:11:19.389 "memory_domains": [ 00:11:19.389 { 00:11:19.389 "dma_device_id": "system", 00:11:19.389 "dma_device_type": 1 00:11:19.389 }, 00:11:19.389 { 00:11:19.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:19.389 "dma_device_type": 2 00:11:19.389 } 00:11:19.389 ], 00:11:19.389 "driver_specific": {} 00:11:19.389 } 00:11:19.389 ] 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.389 16:27:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.389 "name": "Existed_Raid", 00:11:19.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.389 "strip_size_kb": 64, 00:11:19.389 "state": "configuring", 00:11:19.389 "raid_level": "concat", 00:11:19.389 "superblock": false, 00:11:19.389 "num_base_bdevs": 3, 00:11:19.389 "num_base_bdevs_discovered": 1, 00:11:19.389 "num_base_bdevs_operational": 3, 00:11:19.389 "base_bdevs_list": [ 00:11:19.389 { 00:11:19.389 "name": "BaseBdev1", 00:11:19.389 "uuid": "c99d8b65-beab-4be5-a796-e525be1ae29d", 00:11:19.389 "is_configured": true, 00:11:19.389 "data_offset": 0, 00:11:19.389 "data_size": 65536 00:11:19.389 }, 00:11:19.389 { 00:11:19.389 "name": "BaseBdev2", 00:11:19.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.389 "is_configured": false, 00:11:19.389 "data_offset": 0, 00:11:19.389 "data_size": 0 00:11:19.389 }, 00:11:19.389 { 00:11:19.389 "name": "BaseBdev3", 00:11:19.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.389 "is_configured": false, 00:11:19.389 "data_offset": 0, 00:11:19.389 "data_size": 0 00:11:19.389 } 00:11:19.389 ] 00:11:19.389 }' 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.389 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.955 [2024-12-06 16:27:01.500590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:19.955 [2024-12-06 16:27:01.500688] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.955 [2024-12-06 16:27:01.512577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.955 [2024-12-06 16:27:01.514544] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:19.955 [2024-12-06 16:27:01.514623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:19.955 [2024-12-06 16:27:01.514692] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:19.955 [2024-12-06 16:27:01.514718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.955 16:27:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.955 "name": "Existed_Raid", 00:11:19.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.955 "strip_size_kb": 64, 00:11:19.955 "state": "configuring", 00:11:19.955 "raid_level": "concat", 00:11:19.955 "superblock": false, 00:11:19.955 "num_base_bdevs": 3, 00:11:19.955 "num_base_bdevs_discovered": 1, 00:11:19.955 "num_base_bdevs_operational": 3, 00:11:19.955 "base_bdevs_list": [ 00:11:19.955 { 00:11:19.955 "name": "BaseBdev1", 00:11:19.955 "uuid": "c99d8b65-beab-4be5-a796-e525be1ae29d", 00:11:19.955 "is_configured": true, 00:11:19.955 "data_offset": 
0, 00:11:19.955 "data_size": 65536 00:11:19.955 }, 00:11:19.955 { 00:11:19.955 "name": "BaseBdev2", 00:11:19.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.955 "is_configured": false, 00:11:19.955 "data_offset": 0, 00:11:19.955 "data_size": 0 00:11:19.955 }, 00:11:19.955 { 00:11:19.955 "name": "BaseBdev3", 00:11:19.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.955 "is_configured": false, 00:11:19.955 "data_offset": 0, 00:11:19.955 "data_size": 0 00:11:19.955 } 00:11:19.955 ] 00:11:19.955 }' 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.955 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.214 [2024-12-06 16:27:01.991292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.214 BaseBdev2 00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.214 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.214 [ 00:11:20.214 { 00:11:20.214 "name": "BaseBdev2", 00:11:20.214 "aliases": [ 00:11:20.214 "01f4e897-13c9-4ec4-be0c-2e68ef145277" 00:11:20.214 ], 00:11:20.214 "product_name": "Malloc disk", 00:11:20.214 "block_size": 512, 00:11:20.214 "num_blocks": 65536, 00:11:20.214 "uuid": "01f4e897-13c9-4ec4-be0c-2e68ef145277", 00:11:20.214 "assigned_rate_limits": { 00:11:20.214 "rw_ios_per_sec": 0, 00:11:20.214 "rw_mbytes_per_sec": 0, 00:11:20.214 "r_mbytes_per_sec": 0, 00:11:20.214 "w_mbytes_per_sec": 0 00:11:20.214 }, 00:11:20.214 "claimed": true, 00:11:20.214 "claim_type": "exclusive_write", 00:11:20.214 "zoned": false, 00:11:20.214 "supported_io_types": { 00:11:20.214 "read": true, 00:11:20.214 "write": true, 00:11:20.214 "unmap": true, 00:11:20.214 "flush": true, 00:11:20.214 "reset": true, 00:11:20.214 "nvme_admin": false, 00:11:20.214 "nvme_io": false, 00:11:20.214 "nvme_io_md": false, 00:11:20.214 "write_zeroes": true, 00:11:20.214 "zcopy": true, 00:11:20.214 "get_zone_info": false, 00:11:20.214 "zone_management": false, 00:11:20.214 "zone_append": false, 00:11:20.214 "compare": false, 00:11:20.214 "compare_and_write": false, 00:11:20.214 "abort": true, 00:11:20.214 "seek_hole": 
false, 00:11:20.214 "seek_data": false, 00:11:20.214 "copy": true, 00:11:20.214 "nvme_iov_md": false 00:11:20.214 }, 00:11:20.214 "memory_domains": [ 00:11:20.214 { 00:11:20.214 "dma_device_id": "system", 00:11:20.214 "dma_device_type": 1 00:11:20.214 }, 00:11:20.214 { 00:11:20.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.214 "dma_device_type": 2 00:11:20.214 } 00:11:20.214 ], 00:11:20.214 "driver_specific": {} 00:11:20.214 } 00:11:20.214 ] 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.214 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.473 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.473 "name": "Existed_Raid", 00:11:20.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.473 "strip_size_kb": 64, 00:11:20.473 "state": "configuring", 00:11:20.473 "raid_level": "concat", 00:11:20.473 "superblock": false, 00:11:20.473 "num_base_bdevs": 3, 00:11:20.473 "num_base_bdevs_discovered": 2, 00:11:20.473 "num_base_bdevs_operational": 3, 00:11:20.473 "base_bdevs_list": [ 00:11:20.473 { 00:11:20.473 "name": "BaseBdev1", 00:11:20.473 "uuid": "c99d8b65-beab-4be5-a796-e525be1ae29d", 00:11:20.473 "is_configured": true, 00:11:20.473 "data_offset": 0, 00:11:20.473 "data_size": 65536 00:11:20.473 }, 00:11:20.473 { 00:11:20.473 "name": "BaseBdev2", 00:11:20.473 "uuid": "01f4e897-13c9-4ec4-be0c-2e68ef145277", 00:11:20.473 "is_configured": true, 00:11:20.473 "data_offset": 0, 00:11:20.473 "data_size": 65536 00:11:20.473 }, 00:11:20.473 { 00:11:20.473 "name": "BaseBdev3", 00:11:20.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.473 "is_configured": false, 00:11:20.473 "data_offset": 0, 00:11:20.473 "data_size": 0 00:11:20.473 } 00:11:20.473 ] 00:11:20.473 }' 00:11:20.473 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.473 16:27:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.731 [2024-12-06 16:27:02.461758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.731 BaseBdev3 00:11:20.731 [2024-12-06 16:27:02.461921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:20.731 [2024-12-06 16:27:02.461962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:20.731 [2024-12-06 16:27:02.462375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:20.731 [2024-12-06 16:27:02.462603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:20.731 [2024-12-06 16:27:02.462625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:11:20.731 [2024-12-06 16:27:02.462895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.731 16:27:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.731 [ 00:11:20.731 { 00:11:20.731 "name": "BaseBdev3", 00:11:20.731 "aliases": [ 00:11:20.731 "5158e36a-6feb-4523-9c53-5d4f2e7a4c48" 00:11:20.731 ], 00:11:20.731 "product_name": "Malloc disk", 00:11:20.731 "block_size": 512, 00:11:20.731 "num_blocks": 65536, 00:11:20.731 "uuid": "5158e36a-6feb-4523-9c53-5d4f2e7a4c48", 00:11:20.731 "assigned_rate_limits": { 00:11:20.731 "rw_ios_per_sec": 0, 00:11:20.731 "rw_mbytes_per_sec": 0, 00:11:20.731 "r_mbytes_per_sec": 0, 00:11:20.731 "w_mbytes_per_sec": 0 00:11:20.731 }, 00:11:20.731 "claimed": true, 00:11:20.731 "claim_type": "exclusive_write", 00:11:20.731 "zoned": false, 00:11:20.731 "supported_io_types": { 00:11:20.731 "read": true, 00:11:20.731 "write": true, 00:11:20.731 "unmap": true, 00:11:20.731 "flush": true, 00:11:20.731 "reset": true, 00:11:20.731 "nvme_admin": false, 00:11:20.731 "nvme_io": false, 00:11:20.731 "nvme_io_md": false, 00:11:20.731 "write_zeroes": true, 00:11:20.731 "zcopy": true, 00:11:20.731 "get_zone_info": false, 00:11:20.731 "zone_management": false, 00:11:20.731 "zone_append": false, 00:11:20.731 "compare": false, 
00:11:20.731 "compare_and_write": false, 00:11:20.731 "abort": true, 00:11:20.731 "seek_hole": false, 00:11:20.731 "seek_data": false, 00:11:20.731 "copy": true, 00:11:20.731 "nvme_iov_md": false 00:11:20.731 }, 00:11:20.731 "memory_domains": [ 00:11:20.731 { 00:11:20.731 "dma_device_id": "system", 00:11:20.731 "dma_device_type": 1 00:11:20.731 }, 00:11:20.731 { 00:11:20.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.731 "dma_device_type": 2 00:11:20.731 } 00:11:20.731 ], 00:11:20.731 "driver_specific": {} 00:11:20.731 } 00:11:20.731 ] 00:11:20.731 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.732 "name": "Existed_Raid", 00:11:20.732 "uuid": "5ce724db-7eb6-423f-a4f3-0ba3ffbc189f", 00:11:20.732 "strip_size_kb": 64, 00:11:20.732 "state": "online", 00:11:20.732 "raid_level": "concat", 00:11:20.732 "superblock": false, 00:11:20.732 "num_base_bdevs": 3, 00:11:20.732 "num_base_bdevs_discovered": 3, 00:11:20.732 "num_base_bdevs_operational": 3, 00:11:20.732 "base_bdevs_list": [ 00:11:20.732 { 00:11:20.732 "name": "BaseBdev1", 00:11:20.732 "uuid": "c99d8b65-beab-4be5-a796-e525be1ae29d", 00:11:20.732 "is_configured": true, 00:11:20.732 "data_offset": 0, 00:11:20.732 "data_size": 65536 00:11:20.732 }, 00:11:20.732 { 00:11:20.732 "name": "BaseBdev2", 00:11:20.732 "uuid": "01f4e897-13c9-4ec4-be0c-2e68ef145277", 00:11:20.732 "is_configured": true, 00:11:20.732 "data_offset": 0, 00:11:20.732 "data_size": 65536 00:11:20.732 }, 00:11:20.732 { 00:11:20.732 "name": "BaseBdev3", 00:11:20.732 "uuid": "5158e36a-6feb-4523-9c53-5d4f2e7a4c48", 00:11:20.732 "is_configured": true, 00:11:20.732 "data_offset": 0, 00:11:20.732 "data_size": 65536 00:11:20.732 } 00:11:20.732 ] 00:11:20.732 }' 00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:20.732 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.299 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.299 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.299 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.299 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.299 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.299 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.299 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:21.299 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.299 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.299 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.300 [2024-12-06 16:27:02.945398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.300 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.300 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.300 "name": "Existed_Raid", 00:11:21.300 "aliases": [ 00:11:21.300 "5ce724db-7eb6-423f-a4f3-0ba3ffbc189f" 00:11:21.300 ], 00:11:21.300 "product_name": "Raid Volume", 00:11:21.300 "block_size": 512, 00:11:21.300 "num_blocks": 196608, 00:11:21.300 "uuid": "5ce724db-7eb6-423f-a4f3-0ba3ffbc189f", 00:11:21.300 "assigned_rate_limits": { 00:11:21.300 "rw_ios_per_sec": 0, 00:11:21.300 "rw_mbytes_per_sec": 0, 00:11:21.300 "r_mbytes_per_sec": 
0, 00:11:21.300 "w_mbytes_per_sec": 0 00:11:21.300 }, 00:11:21.300 "claimed": false, 00:11:21.300 "zoned": false, 00:11:21.300 "supported_io_types": { 00:11:21.300 "read": true, 00:11:21.300 "write": true, 00:11:21.300 "unmap": true, 00:11:21.300 "flush": true, 00:11:21.300 "reset": true, 00:11:21.300 "nvme_admin": false, 00:11:21.300 "nvme_io": false, 00:11:21.300 "nvme_io_md": false, 00:11:21.300 "write_zeroes": true, 00:11:21.300 "zcopy": false, 00:11:21.300 "get_zone_info": false, 00:11:21.300 "zone_management": false, 00:11:21.300 "zone_append": false, 00:11:21.300 "compare": false, 00:11:21.300 "compare_and_write": false, 00:11:21.300 "abort": false, 00:11:21.300 "seek_hole": false, 00:11:21.300 "seek_data": false, 00:11:21.300 "copy": false, 00:11:21.300 "nvme_iov_md": false 00:11:21.300 }, 00:11:21.300 "memory_domains": [ 00:11:21.300 { 00:11:21.300 "dma_device_id": "system", 00:11:21.300 "dma_device_type": 1 00:11:21.300 }, 00:11:21.300 { 00:11:21.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.300 "dma_device_type": 2 00:11:21.300 }, 00:11:21.300 { 00:11:21.300 "dma_device_id": "system", 00:11:21.300 "dma_device_type": 1 00:11:21.300 }, 00:11:21.300 { 00:11:21.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.300 "dma_device_type": 2 00:11:21.300 }, 00:11:21.300 { 00:11:21.300 "dma_device_id": "system", 00:11:21.300 "dma_device_type": 1 00:11:21.300 }, 00:11:21.300 { 00:11:21.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.300 "dma_device_type": 2 00:11:21.300 } 00:11:21.300 ], 00:11:21.300 "driver_specific": { 00:11:21.300 "raid": { 00:11:21.300 "uuid": "5ce724db-7eb6-423f-a4f3-0ba3ffbc189f", 00:11:21.300 "strip_size_kb": 64, 00:11:21.300 "state": "online", 00:11:21.300 "raid_level": "concat", 00:11:21.300 "superblock": false, 00:11:21.300 "num_base_bdevs": 3, 00:11:21.300 "num_base_bdevs_discovered": 3, 00:11:21.300 "num_base_bdevs_operational": 3, 00:11:21.300 "base_bdevs_list": [ 00:11:21.300 { 00:11:21.300 "name": "BaseBdev1", 
00:11:21.300 "uuid": "c99d8b65-beab-4be5-a796-e525be1ae29d", 00:11:21.300 "is_configured": true, 00:11:21.300 "data_offset": 0, 00:11:21.300 "data_size": 65536 00:11:21.300 }, 00:11:21.300 { 00:11:21.300 "name": "BaseBdev2", 00:11:21.300 "uuid": "01f4e897-13c9-4ec4-be0c-2e68ef145277", 00:11:21.300 "is_configured": true, 00:11:21.300 "data_offset": 0, 00:11:21.300 "data_size": 65536 00:11:21.300 }, 00:11:21.300 { 00:11:21.300 "name": "BaseBdev3", 00:11:21.300 "uuid": "5158e36a-6feb-4523-9c53-5d4f2e7a4c48", 00:11:21.300 "is_configured": true, 00:11:21.300 "data_offset": 0, 00:11:21.300 "data_size": 65536 00:11:21.300 } 00:11:21.300 ] 00:11:21.300 } 00:11:21.300 } 00:11:21.300 }' 00:11:21.300 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:21.300 BaseBdev2 00:11:21.300 BaseBdev3' 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.300 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.560 [2024-12-06 16:27:03.216641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.560 [2024-12-06 16:27:03.216687] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.560 [2024-12-06 16:27:03.216758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.560 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.561 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.561 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.561 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.561 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.561 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.561 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.561 "name": "Existed_Raid", 00:11:21.561 "uuid": "5ce724db-7eb6-423f-a4f3-0ba3ffbc189f", 00:11:21.561 "strip_size_kb": 64, 00:11:21.561 "state": "offline", 00:11:21.561 "raid_level": "concat", 00:11:21.561 "superblock": false, 00:11:21.561 "num_base_bdevs": 3, 00:11:21.561 "num_base_bdevs_discovered": 2, 00:11:21.561 "num_base_bdevs_operational": 2, 00:11:21.561 "base_bdevs_list": [ 00:11:21.561 { 00:11:21.561 "name": null, 00:11:21.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.561 "is_configured": false, 00:11:21.561 "data_offset": 0, 00:11:21.561 "data_size": 65536 00:11:21.561 }, 00:11:21.561 { 00:11:21.561 "name": "BaseBdev2", 00:11:21.561 "uuid": 
"01f4e897-13c9-4ec4-be0c-2e68ef145277", 00:11:21.561 "is_configured": true, 00:11:21.561 "data_offset": 0, 00:11:21.561 "data_size": 65536 00:11:21.561 }, 00:11:21.561 { 00:11:21.561 "name": "BaseBdev3", 00:11:21.561 "uuid": "5158e36a-6feb-4523-9c53-5d4f2e7a4c48", 00:11:21.561 "is_configured": true, 00:11:21.561 "data_offset": 0, 00:11:21.561 "data_size": 65536 00:11:21.561 } 00:11:21.561 ] 00:11:21.561 }' 00:11:21.561 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.561 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.820 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:21.820 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:21.820 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:21.820 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.820 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.820 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.820 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.820 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:21.820 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:21.820 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:21.820 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.820 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.820 [2024-12-06 16:27:03.655924] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.080 [2024-12-06 16:27:03.723586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:22.080 [2024-12-06 16:27:03.723687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:22.080 16:27:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.080 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.081 BaseBdev2 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.081 
16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.081 [ 00:11:22.081 { 00:11:22.081 "name": "BaseBdev2", 00:11:22.081 "aliases": [ 00:11:22.081 "c4fe5337-62d3-47f6-a9b4-dded2ef9ced3" 00:11:22.081 ], 00:11:22.081 "product_name": "Malloc disk", 00:11:22.081 "block_size": 512, 00:11:22.081 "num_blocks": 65536, 00:11:22.081 "uuid": "c4fe5337-62d3-47f6-a9b4-dded2ef9ced3", 00:11:22.081 "assigned_rate_limits": { 00:11:22.081 "rw_ios_per_sec": 0, 00:11:22.081 "rw_mbytes_per_sec": 0, 00:11:22.081 "r_mbytes_per_sec": 0, 00:11:22.081 "w_mbytes_per_sec": 0 00:11:22.081 }, 00:11:22.081 "claimed": false, 00:11:22.081 "zoned": false, 00:11:22.081 "supported_io_types": { 00:11:22.081 "read": true, 00:11:22.081 "write": true, 00:11:22.081 "unmap": true, 00:11:22.081 "flush": true, 00:11:22.081 "reset": true, 00:11:22.081 "nvme_admin": false, 00:11:22.081 "nvme_io": false, 00:11:22.081 "nvme_io_md": false, 00:11:22.081 "write_zeroes": true, 
00:11:22.081 "zcopy": true, 00:11:22.081 "get_zone_info": false, 00:11:22.081 "zone_management": false, 00:11:22.081 "zone_append": false, 00:11:22.081 "compare": false, 00:11:22.081 "compare_and_write": false, 00:11:22.081 "abort": true, 00:11:22.081 "seek_hole": false, 00:11:22.081 "seek_data": false, 00:11:22.081 "copy": true, 00:11:22.081 "nvme_iov_md": false 00:11:22.081 }, 00:11:22.081 "memory_domains": [ 00:11:22.081 { 00:11:22.081 "dma_device_id": "system", 00:11:22.081 "dma_device_type": 1 00:11:22.081 }, 00:11:22.081 { 00:11:22.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.081 "dma_device_type": 2 00:11:22.081 } 00:11:22.081 ], 00:11:22.081 "driver_specific": {} 00:11:22.081 } 00:11:22.081 ] 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.081 BaseBdev3 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.081 16:27:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.081 [ 00:11:22.081 { 00:11:22.081 "name": "BaseBdev3", 00:11:22.081 "aliases": [ 00:11:22.081 "6d5d4030-5481-4c36-a74a-6156ecb97ae5" 00:11:22.081 ], 00:11:22.081 "product_name": "Malloc disk", 00:11:22.081 "block_size": 512, 00:11:22.081 "num_blocks": 65536, 00:11:22.081 "uuid": "6d5d4030-5481-4c36-a74a-6156ecb97ae5", 00:11:22.081 "assigned_rate_limits": { 00:11:22.081 "rw_ios_per_sec": 0, 00:11:22.081 "rw_mbytes_per_sec": 0, 00:11:22.081 "r_mbytes_per_sec": 0, 00:11:22.081 "w_mbytes_per_sec": 0 00:11:22.081 }, 00:11:22.081 "claimed": false, 00:11:22.081 "zoned": false, 00:11:22.081 "supported_io_types": { 00:11:22.081 "read": true, 00:11:22.081 "write": true, 00:11:22.081 "unmap": true, 00:11:22.081 "flush": true, 00:11:22.081 "reset": true, 00:11:22.081 "nvme_admin": false, 00:11:22.081 "nvme_io": false, 00:11:22.081 "nvme_io_md": false, 00:11:22.081 "write_zeroes": true, 
00:11:22.081 "zcopy": true, 00:11:22.081 "get_zone_info": false, 00:11:22.081 "zone_management": false, 00:11:22.081 "zone_append": false, 00:11:22.081 "compare": false, 00:11:22.081 "compare_and_write": false, 00:11:22.081 "abort": true, 00:11:22.081 "seek_hole": false, 00:11:22.081 "seek_data": false, 00:11:22.081 "copy": true, 00:11:22.081 "nvme_iov_md": false 00:11:22.081 }, 00:11:22.081 "memory_domains": [ 00:11:22.081 { 00:11:22.081 "dma_device_id": "system", 00:11:22.081 "dma_device_type": 1 00:11:22.081 }, 00:11:22.081 { 00:11:22.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.081 "dma_device_type": 2 00:11:22.081 } 00:11:22.081 ], 00:11:22.081 "driver_specific": {} 00:11:22.081 } 00:11:22.081 ] 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.081 [2024-12-06 16:27:03.897637] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.081 [2024-12-06 16:27:03.897766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.081 [2024-12-06 16:27:03.897824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.081 [2024-12-06 16:27:03.899982] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.081 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.082 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.082 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.082 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.082 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.341 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.341 16:27:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.341 "name": "Existed_Raid", 00:11:22.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.341 "strip_size_kb": 64, 00:11:22.341 "state": "configuring", 00:11:22.341 "raid_level": "concat", 00:11:22.341 "superblock": false, 00:11:22.341 "num_base_bdevs": 3, 00:11:22.341 "num_base_bdevs_discovered": 2, 00:11:22.341 "num_base_bdevs_operational": 3, 00:11:22.341 "base_bdevs_list": [ 00:11:22.341 { 00:11:22.341 "name": "BaseBdev1", 00:11:22.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.341 "is_configured": false, 00:11:22.341 "data_offset": 0, 00:11:22.341 "data_size": 0 00:11:22.341 }, 00:11:22.341 { 00:11:22.341 "name": "BaseBdev2", 00:11:22.341 "uuid": "c4fe5337-62d3-47f6-a9b4-dded2ef9ced3", 00:11:22.341 "is_configured": true, 00:11:22.341 "data_offset": 0, 00:11:22.341 "data_size": 65536 00:11:22.341 }, 00:11:22.341 { 00:11:22.341 "name": "BaseBdev3", 00:11:22.341 "uuid": "6d5d4030-5481-4c36-a74a-6156ecb97ae5", 00:11:22.341 "is_configured": true, 00:11:22.341 "data_offset": 0, 00:11:22.341 "data_size": 65536 00:11:22.341 } 00:11:22.341 ] 00:11:22.341 }' 00:11:22.341 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.341 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.601 [2024-12-06 16:27:04.328874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.601 "name": "Existed_Raid", 00:11:22.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.601 "strip_size_kb": 64, 00:11:22.601 "state": "configuring", 00:11:22.601 "raid_level": "concat", 00:11:22.601 "superblock": false, 
00:11:22.601 "num_base_bdevs": 3, 00:11:22.601 "num_base_bdevs_discovered": 1, 00:11:22.601 "num_base_bdevs_operational": 3, 00:11:22.601 "base_bdevs_list": [ 00:11:22.601 { 00:11:22.601 "name": "BaseBdev1", 00:11:22.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.601 "is_configured": false, 00:11:22.601 "data_offset": 0, 00:11:22.601 "data_size": 0 00:11:22.601 }, 00:11:22.601 { 00:11:22.601 "name": null, 00:11:22.601 "uuid": "c4fe5337-62d3-47f6-a9b4-dded2ef9ced3", 00:11:22.601 "is_configured": false, 00:11:22.601 "data_offset": 0, 00:11:22.601 "data_size": 65536 00:11:22.601 }, 00:11:22.601 { 00:11:22.601 "name": "BaseBdev3", 00:11:22.601 "uuid": "6d5d4030-5481-4c36-a74a-6156ecb97ae5", 00:11:22.601 "is_configured": true, 00:11:22.601 "data_offset": 0, 00:11:22.601 "data_size": 65536 00:11:22.601 } 00:11:22.601 ] 00:11:22.601 }' 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.601 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.169 
16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.169 [2024-12-06 16:27:04.807264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.169 BaseBdev1 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.169 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.169 [ 00:11:23.169 { 00:11:23.169 "name": "BaseBdev1", 00:11:23.169 "aliases": [ 00:11:23.169 "cd965334-521f-401b-a56f-89db7f9ad4e8" 00:11:23.169 ], 00:11:23.169 "product_name": 
"Malloc disk", 00:11:23.169 "block_size": 512, 00:11:23.169 "num_blocks": 65536, 00:11:23.169 "uuid": "cd965334-521f-401b-a56f-89db7f9ad4e8", 00:11:23.169 "assigned_rate_limits": { 00:11:23.169 "rw_ios_per_sec": 0, 00:11:23.169 "rw_mbytes_per_sec": 0, 00:11:23.170 "r_mbytes_per_sec": 0, 00:11:23.170 "w_mbytes_per_sec": 0 00:11:23.170 }, 00:11:23.170 "claimed": true, 00:11:23.170 "claim_type": "exclusive_write", 00:11:23.170 "zoned": false, 00:11:23.170 "supported_io_types": { 00:11:23.170 "read": true, 00:11:23.170 "write": true, 00:11:23.170 "unmap": true, 00:11:23.170 "flush": true, 00:11:23.170 "reset": true, 00:11:23.170 "nvme_admin": false, 00:11:23.170 "nvme_io": false, 00:11:23.170 "nvme_io_md": false, 00:11:23.170 "write_zeroes": true, 00:11:23.170 "zcopy": true, 00:11:23.170 "get_zone_info": false, 00:11:23.170 "zone_management": false, 00:11:23.170 "zone_append": false, 00:11:23.170 "compare": false, 00:11:23.170 "compare_and_write": false, 00:11:23.170 "abort": true, 00:11:23.170 "seek_hole": false, 00:11:23.170 "seek_data": false, 00:11:23.170 "copy": true, 00:11:23.170 "nvme_iov_md": false 00:11:23.170 }, 00:11:23.170 "memory_domains": [ 00:11:23.170 { 00:11:23.170 "dma_device_id": "system", 00:11:23.170 "dma_device_type": 1 00:11:23.170 }, 00:11:23.170 { 00:11:23.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.170 "dma_device_type": 2 00:11:23.170 } 00:11:23.170 ], 00:11:23.170 "driver_specific": {} 00:11:23.170 } 00:11:23.170 ] 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.170 16:27:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.170 "name": "Existed_Raid", 00:11:23.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.170 "strip_size_kb": 64, 00:11:23.170 "state": "configuring", 00:11:23.170 "raid_level": "concat", 00:11:23.170 "superblock": false, 00:11:23.170 "num_base_bdevs": 3, 00:11:23.170 "num_base_bdevs_discovered": 2, 00:11:23.170 "num_base_bdevs_operational": 3, 00:11:23.170 "base_bdevs_list": [ 00:11:23.170 { 00:11:23.170 "name": "BaseBdev1", 
00:11:23.170 "uuid": "cd965334-521f-401b-a56f-89db7f9ad4e8", 00:11:23.170 "is_configured": true, 00:11:23.170 "data_offset": 0, 00:11:23.170 "data_size": 65536 00:11:23.170 }, 00:11:23.170 { 00:11:23.170 "name": null, 00:11:23.170 "uuid": "c4fe5337-62d3-47f6-a9b4-dded2ef9ced3", 00:11:23.170 "is_configured": false, 00:11:23.170 "data_offset": 0, 00:11:23.170 "data_size": 65536 00:11:23.170 }, 00:11:23.170 { 00:11:23.170 "name": "BaseBdev3", 00:11:23.170 "uuid": "6d5d4030-5481-4c36-a74a-6156ecb97ae5", 00:11:23.170 "is_configured": true, 00:11:23.170 "data_offset": 0, 00:11:23.170 "data_size": 65536 00:11:23.170 } 00:11:23.170 ] 00:11:23.170 }' 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.170 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.500 [2024-12-06 16:27:05.326464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:23.500 
16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.500 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.758 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.758 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.758 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.758 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.758 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.758 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.758 "name": "Existed_Raid", 00:11:23.758 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:23.758 "strip_size_kb": 64, 00:11:23.758 "state": "configuring", 00:11:23.758 "raid_level": "concat", 00:11:23.758 "superblock": false, 00:11:23.758 "num_base_bdevs": 3, 00:11:23.758 "num_base_bdevs_discovered": 1, 00:11:23.758 "num_base_bdevs_operational": 3, 00:11:23.758 "base_bdevs_list": [ 00:11:23.758 { 00:11:23.758 "name": "BaseBdev1", 00:11:23.758 "uuid": "cd965334-521f-401b-a56f-89db7f9ad4e8", 00:11:23.758 "is_configured": true, 00:11:23.758 "data_offset": 0, 00:11:23.758 "data_size": 65536 00:11:23.758 }, 00:11:23.758 { 00:11:23.758 "name": null, 00:11:23.758 "uuid": "c4fe5337-62d3-47f6-a9b4-dded2ef9ced3", 00:11:23.758 "is_configured": false, 00:11:23.758 "data_offset": 0, 00:11:23.758 "data_size": 65536 00:11:23.758 }, 00:11:23.758 { 00:11:23.758 "name": null, 00:11:23.758 "uuid": "6d5d4030-5481-4c36-a74a-6156ecb97ae5", 00:11:23.758 "is_configured": false, 00:11:23.758 "data_offset": 0, 00:11:23.758 "data_size": 65536 00:11:23.758 } 00:11:23.758 ] 00:11:23.758 }' 00:11:23.758 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.758 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.017 [2024-12-06 16:27:05.805648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.017 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.275 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.275 "name": "Existed_Raid", 00:11:24.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.275 "strip_size_kb": 64, 00:11:24.275 "state": "configuring", 00:11:24.275 "raid_level": "concat", 00:11:24.275 "superblock": false, 00:11:24.275 "num_base_bdevs": 3, 00:11:24.275 "num_base_bdevs_discovered": 2, 00:11:24.275 "num_base_bdevs_operational": 3, 00:11:24.275 "base_bdevs_list": [ 00:11:24.275 { 00:11:24.275 "name": "BaseBdev1", 00:11:24.275 "uuid": "cd965334-521f-401b-a56f-89db7f9ad4e8", 00:11:24.275 "is_configured": true, 00:11:24.275 "data_offset": 0, 00:11:24.275 "data_size": 65536 00:11:24.275 }, 00:11:24.275 { 00:11:24.275 "name": null, 00:11:24.275 "uuid": "c4fe5337-62d3-47f6-a9b4-dded2ef9ced3", 00:11:24.275 "is_configured": false, 00:11:24.275 "data_offset": 0, 00:11:24.275 "data_size": 65536 00:11:24.275 }, 00:11:24.275 { 00:11:24.275 "name": "BaseBdev3", 00:11:24.275 "uuid": "6d5d4030-5481-4c36-a74a-6156ecb97ae5", 00:11:24.275 "is_configured": true, 00:11:24.275 "data_offset": 0, 00:11:24.275 "data_size": 65536 00:11:24.275 } 00:11:24.275 ] 00:11:24.275 }' 00:11:24.275 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.275 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.533 [2024-12-06 16:27:06.268950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.533 16:27:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.533 "name": "Existed_Raid", 00:11:24.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.533 "strip_size_kb": 64, 00:11:24.533 "state": "configuring", 00:11:24.533 "raid_level": "concat", 00:11:24.533 "superblock": false, 00:11:24.533 "num_base_bdevs": 3, 00:11:24.533 "num_base_bdevs_discovered": 1, 00:11:24.533 "num_base_bdevs_operational": 3, 00:11:24.533 "base_bdevs_list": [ 00:11:24.533 { 00:11:24.533 "name": null, 00:11:24.533 "uuid": "cd965334-521f-401b-a56f-89db7f9ad4e8", 00:11:24.533 "is_configured": false, 00:11:24.533 "data_offset": 0, 00:11:24.533 "data_size": 65536 00:11:24.533 }, 00:11:24.533 { 00:11:24.533 "name": null, 00:11:24.533 "uuid": "c4fe5337-62d3-47f6-a9b4-dded2ef9ced3", 00:11:24.533 "is_configured": false, 00:11:24.533 "data_offset": 0, 00:11:24.533 "data_size": 65536 00:11:24.533 }, 00:11:24.533 { 00:11:24.533 "name": "BaseBdev3", 00:11:24.533 "uuid": "6d5d4030-5481-4c36-a74a-6156ecb97ae5", 00:11:24.533 "is_configured": true, 00:11:24.533 "data_offset": 0, 00:11:24.533 "data_size": 65536 00:11:24.533 } 00:11:24.533 ] 00:11:24.533 }' 00:11:24.533 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.533 16:27:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.100 [2024-12-06 16:27:06.791000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.100 16:27:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.100 "name": "Existed_Raid", 00:11:25.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.100 "strip_size_kb": 64, 00:11:25.100 "state": "configuring", 00:11:25.100 "raid_level": "concat", 00:11:25.100 "superblock": false, 00:11:25.100 "num_base_bdevs": 3, 00:11:25.100 "num_base_bdevs_discovered": 2, 00:11:25.100 "num_base_bdevs_operational": 3, 00:11:25.100 "base_bdevs_list": [ 00:11:25.100 { 00:11:25.100 "name": null, 00:11:25.100 "uuid": "cd965334-521f-401b-a56f-89db7f9ad4e8", 00:11:25.100 "is_configured": false, 00:11:25.100 "data_offset": 0, 00:11:25.100 "data_size": 65536 00:11:25.100 }, 00:11:25.100 { 00:11:25.100 "name": "BaseBdev2", 00:11:25.100 "uuid": "c4fe5337-62d3-47f6-a9b4-dded2ef9ced3", 00:11:25.100 "is_configured": true, 00:11:25.100 "data_offset": 
0, 00:11:25.100 "data_size": 65536 00:11:25.100 }, 00:11:25.100 { 00:11:25.100 "name": "BaseBdev3", 00:11:25.100 "uuid": "6d5d4030-5481-4c36-a74a-6156ecb97ae5", 00:11:25.100 "is_configured": true, 00:11:25.100 "data_offset": 0, 00:11:25.100 "data_size": 65536 00:11:25.100 } 00:11:25.100 ] 00:11:25.100 }' 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.100 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cd965334-521f-401b-a56f-89db7f9ad4e8 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.693 [2024-12-06 16:27:07.361436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:25.693 [2024-12-06 16:27:07.361569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:25.693 [2024-12-06 16:27:07.361599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:25.693 [2024-12-06 16:27:07.361890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:25.693 [2024-12-06 16:27:07.362059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:25.693 [2024-12-06 16:27:07.362103] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:25.693 [2024-12-06 16:27:07.362348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.693 NewBaseBdev 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.693 
16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.693 [ 00:11:25.693 { 00:11:25.693 "name": "NewBaseBdev", 00:11:25.693 "aliases": [ 00:11:25.693 "cd965334-521f-401b-a56f-89db7f9ad4e8" 00:11:25.693 ], 00:11:25.693 "product_name": "Malloc disk", 00:11:25.693 "block_size": 512, 00:11:25.693 "num_blocks": 65536, 00:11:25.693 "uuid": "cd965334-521f-401b-a56f-89db7f9ad4e8", 00:11:25.693 "assigned_rate_limits": { 00:11:25.693 "rw_ios_per_sec": 0, 00:11:25.693 "rw_mbytes_per_sec": 0, 00:11:25.693 "r_mbytes_per_sec": 0, 00:11:25.693 "w_mbytes_per_sec": 0 00:11:25.693 }, 00:11:25.693 "claimed": true, 00:11:25.693 "claim_type": "exclusive_write", 00:11:25.693 "zoned": false, 00:11:25.693 "supported_io_types": { 00:11:25.693 "read": true, 00:11:25.693 "write": true, 00:11:25.693 "unmap": true, 00:11:25.693 "flush": true, 00:11:25.693 "reset": true, 00:11:25.693 "nvme_admin": false, 00:11:25.693 "nvme_io": false, 00:11:25.693 "nvme_io_md": false, 00:11:25.693 "write_zeroes": true, 00:11:25.693 "zcopy": true, 00:11:25.693 "get_zone_info": false, 00:11:25.693 "zone_management": false, 00:11:25.693 "zone_append": false, 00:11:25.693 "compare": false, 00:11:25.693 "compare_and_write": false, 00:11:25.693 "abort": true, 00:11:25.693 "seek_hole": false, 00:11:25.693 "seek_data": false, 00:11:25.693 "copy": true, 00:11:25.693 "nvme_iov_md": false 00:11:25.693 }, 00:11:25.693 
"memory_domains": [ 00:11:25.693 { 00:11:25.693 "dma_device_id": "system", 00:11:25.693 "dma_device_type": 1 00:11:25.693 }, 00:11:25.693 { 00:11:25.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.693 "dma_device_type": 2 00:11:25.693 } 00:11:25.693 ], 00:11:25.693 "driver_specific": {} 00:11:25.693 } 00:11:25.693 ] 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.693 "name": "Existed_Raid", 00:11:25.693 "uuid": "5a9278d2-d4d1-41a4-8278-30389ae90484", 00:11:25.693 "strip_size_kb": 64, 00:11:25.693 "state": "online", 00:11:25.693 "raid_level": "concat", 00:11:25.693 "superblock": false, 00:11:25.693 "num_base_bdevs": 3, 00:11:25.693 "num_base_bdevs_discovered": 3, 00:11:25.693 "num_base_bdevs_operational": 3, 00:11:25.693 "base_bdevs_list": [ 00:11:25.693 { 00:11:25.693 "name": "NewBaseBdev", 00:11:25.693 "uuid": "cd965334-521f-401b-a56f-89db7f9ad4e8", 00:11:25.693 "is_configured": true, 00:11:25.693 "data_offset": 0, 00:11:25.693 "data_size": 65536 00:11:25.693 }, 00:11:25.693 { 00:11:25.693 "name": "BaseBdev2", 00:11:25.693 "uuid": "c4fe5337-62d3-47f6-a9b4-dded2ef9ced3", 00:11:25.693 "is_configured": true, 00:11:25.693 "data_offset": 0, 00:11:25.693 "data_size": 65536 00:11:25.693 }, 00:11:25.693 { 00:11:25.693 "name": "BaseBdev3", 00:11:25.693 "uuid": "6d5d4030-5481-4c36-a74a-6156ecb97ae5", 00:11:25.693 "is_configured": true, 00:11:25.693 "data_offset": 0, 00:11:25.693 "data_size": 65536 00:11:25.693 } 00:11:25.693 ] 00:11:25.693 }' 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.693 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.262 [2024-12-06 16:27:07.884954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.262 "name": "Existed_Raid", 00:11:26.262 "aliases": [ 00:11:26.262 "5a9278d2-d4d1-41a4-8278-30389ae90484" 00:11:26.262 ], 00:11:26.262 "product_name": "Raid Volume", 00:11:26.262 "block_size": 512, 00:11:26.262 "num_blocks": 196608, 00:11:26.262 "uuid": "5a9278d2-d4d1-41a4-8278-30389ae90484", 00:11:26.262 "assigned_rate_limits": { 00:11:26.262 "rw_ios_per_sec": 0, 00:11:26.262 "rw_mbytes_per_sec": 0, 00:11:26.262 "r_mbytes_per_sec": 0, 00:11:26.262 "w_mbytes_per_sec": 0 00:11:26.262 }, 00:11:26.262 "claimed": false, 00:11:26.262 "zoned": false, 00:11:26.262 "supported_io_types": { 00:11:26.262 "read": true, 00:11:26.262 "write": true, 00:11:26.262 "unmap": true, 00:11:26.262 "flush": true, 00:11:26.262 "reset": true, 00:11:26.262 "nvme_admin": false, 00:11:26.262 "nvme_io": false, 00:11:26.262 "nvme_io_md": false, 00:11:26.262 "write_zeroes": true, 
00:11:26.262 "zcopy": false, 00:11:26.262 "get_zone_info": false, 00:11:26.262 "zone_management": false, 00:11:26.262 "zone_append": false, 00:11:26.262 "compare": false, 00:11:26.262 "compare_and_write": false, 00:11:26.262 "abort": false, 00:11:26.262 "seek_hole": false, 00:11:26.262 "seek_data": false, 00:11:26.262 "copy": false, 00:11:26.262 "nvme_iov_md": false 00:11:26.262 }, 00:11:26.262 "memory_domains": [ 00:11:26.262 { 00:11:26.262 "dma_device_id": "system", 00:11:26.262 "dma_device_type": 1 00:11:26.262 }, 00:11:26.262 { 00:11:26.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.262 "dma_device_type": 2 00:11:26.262 }, 00:11:26.262 { 00:11:26.262 "dma_device_id": "system", 00:11:26.262 "dma_device_type": 1 00:11:26.262 }, 00:11:26.262 { 00:11:26.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.262 "dma_device_type": 2 00:11:26.262 }, 00:11:26.262 { 00:11:26.262 "dma_device_id": "system", 00:11:26.262 "dma_device_type": 1 00:11:26.262 }, 00:11:26.262 { 00:11:26.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.262 "dma_device_type": 2 00:11:26.262 } 00:11:26.262 ], 00:11:26.262 "driver_specific": { 00:11:26.262 "raid": { 00:11:26.262 "uuid": "5a9278d2-d4d1-41a4-8278-30389ae90484", 00:11:26.262 "strip_size_kb": 64, 00:11:26.262 "state": "online", 00:11:26.262 "raid_level": "concat", 00:11:26.262 "superblock": false, 00:11:26.262 "num_base_bdevs": 3, 00:11:26.262 "num_base_bdevs_discovered": 3, 00:11:26.262 "num_base_bdevs_operational": 3, 00:11:26.262 "base_bdevs_list": [ 00:11:26.262 { 00:11:26.262 "name": "NewBaseBdev", 00:11:26.262 "uuid": "cd965334-521f-401b-a56f-89db7f9ad4e8", 00:11:26.262 "is_configured": true, 00:11:26.262 "data_offset": 0, 00:11:26.262 "data_size": 65536 00:11:26.262 }, 00:11:26.262 { 00:11:26.262 "name": "BaseBdev2", 00:11:26.262 "uuid": "c4fe5337-62d3-47f6-a9b4-dded2ef9ced3", 00:11:26.262 "is_configured": true, 00:11:26.262 "data_offset": 0, 00:11:26.262 "data_size": 65536 00:11:26.262 }, 00:11:26.262 { 
00:11:26.262 "name": "BaseBdev3", 00:11:26.262 "uuid": "6d5d4030-5481-4c36-a74a-6156ecb97ae5", 00:11:26.262 "is_configured": true, 00:11:26.262 "data_offset": 0, 00:11:26.262 "data_size": 65536 00:11:26.262 } 00:11:26.262 ] 00:11:26.262 } 00:11:26.262 } 00:11:26.262 }' 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:26.262 BaseBdev2 00:11:26.262 BaseBdev3' 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.262 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.262 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:26.522 [2024-12-06 16:27:08.132219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.522 [2024-12-06 16:27:08.132298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.522 [2024-12-06 16:27:08.132413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.522 [2024-12-06 16:27:08.132503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.522 [2024-12-06 16:27:08.132582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77120 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 77120 ']' 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 77120 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77120 00:11:26.522 killing process with pid 77120 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77120' 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 77120 00:11:26.522 [2024-12-06 16:27:08.184012] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.522 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 77120 00:11:26.522 [2024-12-06 16:27:08.216989] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:26.782 00:11:26.782 real 0m8.835s 00:11:26.782 user 0m15.088s 00:11:26.782 sys 0m1.751s 00:11:26.782 ************************************ 00:11:26.782 END TEST raid_state_function_test 00:11:26.782 ************************************ 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.782 16:27:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:11:26.782 16:27:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:26.782 16:27:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.782 16:27:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.782 ************************************ 00:11:26.782 START TEST raid_state_function_test_sb 00:11:26.782 ************************************ 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77725 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:26.782 Process raid pid: 77725 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77725' 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77725 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77725 ']' 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.782 16:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.782 [2024-12-06 16:27:08.617521] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:11:26.782 [2024-12-06 16:27:08.617747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.042 [2024-12-06 16:27:08.790129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.042 [2024-12-06 16:27:08.821134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.042 [2024-12-06 16:27:08.866685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.042 [2024-12-06 16:27:08.866727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.981 [2024-12-06 16:27:09.478721] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.981 [2024-12-06 16:27:09.478880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.981 [2024-12-06 
16:27:09.478920] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.981 [2024-12-06 16:27:09.478947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.981 [2024-12-06 16:27:09.478971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.981 [2024-12-06 16:27:09.478997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.981 "name": "Existed_Raid", 00:11:27.981 "uuid": "b3060fde-83d2-41da-84bd-6280166f2410", 00:11:27.981 "strip_size_kb": 64, 00:11:27.981 "state": "configuring", 00:11:27.981 "raid_level": "concat", 00:11:27.981 "superblock": true, 00:11:27.981 "num_base_bdevs": 3, 00:11:27.981 "num_base_bdevs_discovered": 0, 00:11:27.981 "num_base_bdevs_operational": 3, 00:11:27.981 "base_bdevs_list": [ 00:11:27.981 { 00:11:27.981 "name": "BaseBdev1", 00:11:27.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.981 "is_configured": false, 00:11:27.981 "data_offset": 0, 00:11:27.981 "data_size": 0 00:11:27.981 }, 00:11:27.981 { 00:11:27.981 "name": "BaseBdev2", 00:11:27.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.981 "is_configured": false, 00:11:27.981 "data_offset": 0, 00:11:27.981 "data_size": 0 00:11:27.981 }, 00:11:27.981 { 00:11:27.981 "name": "BaseBdev3", 00:11:27.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.981 "is_configured": false, 00:11:27.981 "data_offset": 0, 00:11:27.981 "data_size": 0 00:11:27.981 } 00:11:27.981 ] 00:11:27.981 }' 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.981 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.241 [2024-12-06 16:27:09.913891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.241 [2024-12-06 16:27:09.913981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.241 [2024-12-06 16:27:09.925894] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.241 [2024-12-06 16:27:09.925973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.241 [2024-12-06 16:27:09.926022] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.241 [2024-12-06 16:27:09.926049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.241 [2024-12-06 16:27:09.926079] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.241 [2024-12-06 16:27:09.926106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:28.241 
16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.241 [2024-12-06 16:27:09.947441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.241 BaseBdev1 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.241 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.242 [ 00:11:28.242 { 
00:11:28.242 "name": "BaseBdev1", 00:11:28.242 "aliases": [ 00:11:28.242 "5cba8f8b-fa8a-4fb8-a901-61c4eb77b9ad" 00:11:28.242 ], 00:11:28.242 "product_name": "Malloc disk", 00:11:28.242 "block_size": 512, 00:11:28.242 "num_blocks": 65536, 00:11:28.242 "uuid": "5cba8f8b-fa8a-4fb8-a901-61c4eb77b9ad", 00:11:28.242 "assigned_rate_limits": { 00:11:28.242 "rw_ios_per_sec": 0, 00:11:28.242 "rw_mbytes_per_sec": 0, 00:11:28.242 "r_mbytes_per_sec": 0, 00:11:28.242 "w_mbytes_per_sec": 0 00:11:28.242 }, 00:11:28.242 "claimed": true, 00:11:28.242 "claim_type": "exclusive_write", 00:11:28.242 "zoned": false, 00:11:28.242 "supported_io_types": { 00:11:28.242 "read": true, 00:11:28.242 "write": true, 00:11:28.242 "unmap": true, 00:11:28.242 "flush": true, 00:11:28.242 "reset": true, 00:11:28.242 "nvme_admin": false, 00:11:28.242 "nvme_io": false, 00:11:28.242 "nvme_io_md": false, 00:11:28.242 "write_zeroes": true, 00:11:28.242 "zcopy": true, 00:11:28.242 "get_zone_info": false, 00:11:28.242 "zone_management": false, 00:11:28.242 "zone_append": false, 00:11:28.242 "compare": false, 00:11:28.242 "compare_and_write": false, 00:11:28.242 "abort": true, 00:11:28.242 "seek_hole": false, 00:11:28.242 "seek_data": false, 00:11:28.242 "copy": true, 00:11:28.242 "nvme_iov_md": false 00:11:28.242 }, 00:11:28.242 "memory_domains": [ 00:11:28.242 { 00:11:28.242 "dma_device_id": "system", 00:11:28.242 "dma_device_type": 1 00:11:28.242 }, 00:11:28.242 { 00:11:28.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.242 "dma_device_type": 2 00:11:28.242 } 00:11:28.242 ], 00:11:28.242 "driver_specific": {} 00:11:28.242 } 00:11:28.242 ] 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
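The `verify_raid_bdev_state` helper traced above and below boils down to: fetch the raid bdev's info (`rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'`) and compare fields such as `state` and the base-bdev counts against expectations. A minimal self-contained sketch of that check, assuming a captured info blob in place of a live SPDK target (the `json_field` helper and `sed`-based extraction are illustrative stand-ins for the real `rpc_cmd | jq` pipeline, not part of the test suite):

```shell
# Sketch of the state check verify_raid_bdev_state performs. A captured
# raid_bdev_info blob (matching the shape dumped in the log) stands in for a
# live `rpc_cmd bdev_raid_get_bdevs all` call, and sed stands in for jq so
# the sketch has no external dependencies.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1
}'

# Extract one scalar field from the line-per-field JSON above.
json_field() {
    printf '%s\n' "$raid_bdev_info" |
        sed -n "s/^ *\"$1\": \"\{0,1\}\([a-z0-9]*\)\"\{0,1\},\{0,1\}\$/\1/p"
}

expected_state=configuring
expected_operational=3

# The real helper fails the test when any field deviates from expectations.
[ "$(json_field state)" = "$expected_state" ] || exit 1
[ "$(json_field num_base_bdevs)" = "$expected_operational" ] || exit 1
echo "state OK: $(json_field num_base_bdevs_discovered)/$expected_operational base bdevs discovered"
```

At this point in the log only BaseBdev1 exists, which is why `num_base_bdevs_discovered` is 1 while `num_base_bdevs_operational` stays 3 and the array remains `configuring`; once all three base bdevs are claimed the same check is repeated with `expected_state=online`.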
00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.242 16:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.242 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.242 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.242 "name": "Existed_Raid", 00:11:28.242 "uuid": "cf407a2b-f3cf-4315-9a2c-6147a396207d", 00:11:28.242 "strip_size_kb": 64, 00:11:28.242 "state": "configuring", 00:11:28.242 "raid_level": "concat", 00:11:28.242 "superblock": true, 00:11:28.242 
"num_base_bdevs": 3, 00:11:28.242 "num_base_bdevs_discovered": 1, 00:11:28.242 "num_base_bdevs_operational": 3, 00:11:28.242 "base_bdevs_list": [ 00:11:28.242 { 00:11:28.242 "name": "BaseBdev1", 00:11:28.242 "uuid": "5cba8f8b-fa8a-4fb8-a901-61c4eb77b9ad", 00:11:28.242 "is_configured": true, 00:11:28.242 "data_offset": 2048, 00:11:28.242 "data_size": 63488 00:11:28.242 }, 00:11:28.242 { 00:11:28.242 "name": "BaseBdev2", 00:11:28.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.242 "is_configured": false, 00:11:28.242 "data_offset": 0, 00:11:28.242 "data_size": 0 00:11:28.242 }, 00:11:28.242 { 00:11:28.242 "name": "BaseBdev3", 00:11:28.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.242 "is_configured": false, 00:11:28.242 "data_offset": 0, 00:11:28.242 "data_size": 0 00:11:28.242 } 00:11:28.242 ] 00:11:28.242 }' 00:11:28.242 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.242 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.811 [2024-12-06 16:27:10.470625] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.811 [2024-12-06 16:27:10.470742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:28.811 
16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.811 [2024-12-06 16:27:10.482659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.811 [2024-12-06 16:27:10.484713] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.811 [2024-12-06 16:27:10.484820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.811 [2024-12-06 16:27:10.484855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.811 [2024-12-06 16:27:10.484892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.811 "name": "Existed_Raid", 00:11:28.811 "uuid": "9174653b-231f-4c02-badc-816708ee372b", 00:11:28.811 "strip_size_kb": 64, 00:11:28.811 "state": "configuring", 00:11:28.811 "raid_level": "concat", 00:11:28.811 "superblock": true, 00:11:28.811 "num_base_bdevs": 3, 00:11:28.811 "num_base_bdevs_discovered": 1, 00:11:28.811 "num_base_bdevs_operational": 3, 00:11:28.811 "base_bdevs_list": [ 00:11:28.811 { 00:11:28.811 "name": "BaseBdev1", 00:11:28.811 "uuid": "5cba8f8b-fa8a-4fb8-a901-61c4eb77b9ad", 00:11:28.811 "is_configured": true, 00:11:28.811 "data_offset": 2048, 00:11:28.811 "data_size": 63488 00:11:28.811 }, 00:11:28.811 { 00:11:28.811 "name": "BaseBdev2", 00:11:28.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.811 "is_configured": false, 00:11:28.811 "data_offset": 0, 00:11:28.811 "data_size": 0 00:11:28.811 }, 00:11:28.811 { 00:11:28.811 "name": "BaseBdev3", 00:11:28.811 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:28.811 "is_configured": false, 00:11:28.811 "data_offset": 0, 00:11:28.811 "data_size": 0 00:11:28.811 } 00:11:28.811 ] 00:11:28.811 }' 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.811 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.381 [2024-12-06 16:27:10.941145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.381 BaseBdev2 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.381 16:27:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.382 [ 00:11:29.382 { 00:11:29.382 "name": "BaseBdev2", 00:11:29.382 "aliases": [ 00:11:29.382 "ae03ab9c-aee5-454d-9132-3ad8b51a2e24" 00:11:29.382 ], 00:11:29.382 "product_name": "Malloc disk", 00:11:29.382 "block_size": 512, 00:11:29.382 "num_blocks": 65536, 00:11:29.382 "uuid": "ae03ab9c-aee5-454d-9132-3ad8b51a2e24", 00:11:29.382 "assigned_rate_limits": { 00:11:29.382 "rw_ios_per_sec": 0, 00:11:29.382 "rw_mbytes_per_sec": 0, 00:11:29.382 "r_mbytes_per_sec": 0, 00:11:29.382 "w_mbytes_per_sec": 0 00:11:29.382 }, 00:11:29.382 "claimed": true, 00:11:29.382 "claim_type": "exclusive_write", 00:11:29.382 "zoned": false, 00:11:29.382 "supported_io_types": { 00:11:29.382 "read": true, 00:11:29.382 "write": true, 00:11:29.382 "unmap": true, 00:11:29.382 "flush": true, 00:11:29.382 "reset": true, 00:11:29.382 "nvme_admin": false, 00:11:29.382 "nvme_io": false, 00:11:29.382 "nvme_io_md": false, 00:11:29.382 "write_zeroes": true, 00:11:29.382 "zcopy": true, 00:11:29.382 "get_zone_info": false, 00:11:29.382 "zone_management": false, 00:11:29.382 "zone_append": false, 00:11:29.382 "compare": false, 00:11:29.382 "compare_and_write": false, 00:11:29.382 "abort": true, 00:11:29.382 "seek_hole": false, 00:11:29.382 "seek_data": false, 00:11:29.382 "copy": true, 00:11:29.382 "nvme_iov_md": false 00:11:29.382 }, 00:11:29.382 "memory_domains": [ 00:11:29.382 { 00:11:29.382 "dma_device_id": "system", 00:11:29.382 "dma_device_type": 1 00:11:29.382 }, 00:11:29.382 { 00:11:29.382 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.382 "dma_device_type": 2 00:11:29.382 } 00:11:29.382 ], 00:11:29.382 "driver_specific": {} 00:11:29.382 } 00:11:29.382 ] 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.382 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.382 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.382 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.382 "name": "Existed_Raid", 00:11:29.382 "uuid": "9174653b-231f-4c02-badc-816708ee372b", 00:11:29.382 "strip_size_kb": 64, 00:11:29.382 "state": "configuring", 00:11:29.382 "raid_level": "concat", 00:11:29.382 "superblock": true, 00:11:29.382 "num_base_bdevs": 3, 00:11:29.382 "num_base_bdevs_discovered": 2, 00:11:29.382 "num_base_bdevs_operational": 3, 00:11:29.382 "base_bdevs_list": [ 00:11:29.382 { 00:11:29.382 "name": "BaseBdev1", 00:11:29.382 "uuid": "5cba8f8b-fa8a-4fb8-a901-61c4eb77b9ad", 00:11:29.382 "is_configured": true, 00:11:29.382 "data_offset": 2048, 00:11:29.382 "data_size": 63488 00:11:29.382 }, 00:11:29.382 { 00:11:29.382 "name": "BaseBdev2", 00:11:29.382 "uuid": "ae03ab9c-aee5-454d-9132-3ad8b51a2e24", 00:11:29.382 "is_configured": true, 00:11:29.382 "data_offset": 2048, 00:11:29.382 "data_size": 63488 00:11:29.382 }, 00:11:29.382 { 00:11:29.382 "name": "BaseBdev3", 00:11:29.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.382 "is_configured": false, 00:11:29.382 "data_offset": 0, 00:11:29.382 "data_size": 0 00:11:29.382 } 00:11:29.382 ] 00:11:29.382 }' 00:11:29.382 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.382 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:29.642 16:27:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.642 [2024-12-06 16:27:11.440751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.642 BaseBdev3 00:11:29.642 [2024-12-06 16:27:11.441181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:29.642 [2024-12-06 16:27:11.441231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:29.642 [2024-12-06 16:27:11.441657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:29.642 [2024-12-06 16:27:11.441854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:29.642 [2024-12-06 16:27:11.441871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:11:29.642 [2024-12-06 16:27:11.442043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.642 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.642 [ 00:11:29.642 { 00:11:29.642 "name": "BaseBdev3", 00:11:29.642 "aliases": [ 00:11:29.642 "78e2c82e-288c-41d2-975c-c0b35f60f0a7" 00:11:29.642 ], 00:11:29.642 "product_name": "Malloc disk", 00:11:29.642 "block_size": 512, 00:11:29.642 "num_blocks": 65536, 00:11:29.642 "uuid": "78e2c82e-288c-41d2-975c-c0b35f60f0a7", 00:11:29.642 "assigned_rate_limits": { 00:11:29.642 "rw_ios_per_sec": 0, 00:11:29.642 "rw_mbytes_per_sec": 0, 00:11:29.642 "r_mbytes_per_sec": 0, 00:11:29.642 "w_mbytes_per_sec": 0 00:11:29.642 }, 00:11:29.642 "claimed": true, 00:11:29.642 "claim_type": "exclusive_write", 00:11:29.642 "zoned": false, 00:11:29.642 "supported_io_types": { 00:11:29.642 "read": true, 00:11:29.642 "write": true, 00:11:29.642 "unmap": true, 00:11:29.642 "flush": true, 00:11:29.642 "reset": true, 00:11:29.642 "nvme_admin": false, 00:11:29.642 "nvme_io": false, 00:11:29.642 "nvme_io_md": false, 00:11:29.642 "write_zeroes": true, 00:11:29.642 "zcopy": true, 00:11:29.642 "get_zone_info": false, 00:11:29.642 "zone_management": false, 00:11:29.642 "zone_append": false, 00:11:29.642 "compare": false, 00:11:29.642 "compare_and_write": false, 00:11:29.642 "abort": true, 00:11:29.642 "seek_hole": false, 00:11:29.642 "seek_data": false, 
00:11:29.642 "copy": true, 00:11:29.903 "nvme_iov_md": false 00:11:29.903 }, 00:11:29.903 "memory_domains": [ 00:11:29.903 { 00:11:29.903 "dma_device_id": "system", 00:11:29.903 "dma_device_type": 1 00:11:29.903 }, 00:11:29.903 { 00:11:29.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.903 "dma_device_type": 2 00:11:29.903 } 00:11:29.903 ], 00:11:29.903 "driver_specific": {} 00:11:29.903 } 00:11:29.903 ] 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.903 "name": "Existed_Raid", 00:11:29.903 "uuid": "9174653b-231f-4c02-badc-816708ee372b", 00:11:29.903 "strip_size_kb": 64, 00:11:29.903 "state": "online", 00:11:29.903 "raid_level": "concat", 00:11:29.903 "superblock": true, 00:11:29.903 "num_base_bdevs": 3, 00:11:29.903 "num_base_bdevs_discovered": 3, 00:11:29.903 "num_base_bdevs_operational": 3, 00:11:29.903 "base_bdevs_list": [ 00:11:29.903 { 00:11:29.903 "name": "BaseBdev1", 00:11:29.903 "uuid": "5cba8f8b-fa8a-4fb8-a901-61c4eb77b9ad", 00:11:29.903 "is_configured": true, 00:11:29.903 "data_offset": 2048, 00:11:29.903 "data_size": 63488 00:11:29.903 }, 00:11:29.903 { 00:11:29.903 "name": "BaseBdev2", 00:11:29.903 "uuid": "ae03ab9c-aee5-454d-9132-3ad8b51a2e24", 00:11:29.903 "is_configured": true, 00:11:29.903 "data_offset": 2048, 00:11:29.903 "data_size": 63488 00:11:29.903 }, 00:11:29.903 { 00:11:29.903 "name": "BaseBdev3", 00:11:29.903 "uuid": "78e2c82e-288c-41d2-975c-c0b35f60f0a7", 00:11:29.903 "is_configured": true, 00:11:29.903 "data_offset": 2048, 00:11:29.903 "data_size": 63488 00:11:29.903 } 00:11:29.903 ] 00:11:29.903 }' 00:11:29.903 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.903 16:27:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.162 [2024-12-06 16:27:11.952240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.162 "name": "Existed_Raid", 00:11:30.162 "aliases": [ 00:11:30.162 "9174653b-231f-4c02-badc-816708ee372b" 00:11:30.162 ], 00:11:30.162 "product_name": "Raid Volume", 00:11:30.162 "block_size": 512, 00:11:30.162 "num_blocks": 190464, 00:11:30.162 "uuid": "9174653b-231f-4c02-badc-816708ee372b", 00:11:30.162 "assigned_rate_limits": { 00:11:30.162 "rw_ios_per_sec": 0, 00:11:30.162 "rw_mbytes_per_sec": 0, 00:11:30.162 
"r_mbytes_per_sec": 0, 00:11:30.162 "w_mbytes_per_sec": 0 00:11:30.162 }, 00:11:30.162 "claimed": false, 00:11:30.162 "zoned": false, 00:11:30.162 "supported_io_types": { 00:11:30.162 "read": true, 00:11:30.162 "write": true, 00:11:30.162 "unmap": true, 00:11:30.162 "flush": true, 00:11:30.162 "reset": true, 00:11:30.162 "nvme_admin": false, 00:11:30.162 "nvme_io": false, 00:11:30.162 "nvme_io_md": false, 00:11:30.162 "write_zeroes": true, 00:11:30.162 "zcopy": false, 00:11:30.162 "get_zone_info": false, 00:11:30.162 "zone_management": false, 00:11:30.162 "zone_append": false, 00:11:30.162 "compare": false, 00:11:30.162 "compare_and_write": false, 00:11:30.162 "abort": false, 00:11:30.162 "seek_hole": false, 00:11:30.162 "seek_data": false, 00:11:30.162 "copy": false, 00:11:30.162 "nvme_iov_md": false 00:11:30.162 }, 00:11:30.162 "memory_domains": [ 00:11:30.162 { 00:11:30.162 "dma_device_id": "system", 00:11:30.162 "dma_device_type": 1 00:11:30.162 }, 00:11:30.162 { 00:11:30.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.162 "dma_device_type": 2 00:11:30.162 }, 00:11:30.162 { 00:11:30.162 "dma_device_id": "system", 00:11:30.162 "dma_device_type": 1 00:11:30.162 }, 00:11:30.162 { 00:11:30.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.162 "dma_device_type": 2 00:11:30.162 }, 00:11:30.162 { 00:11:30.162 "dma_device_id": "system", 00:11:30.162 "dma_device_type": 1 00:11:30.162 }, 00:11:30.162 { 00:11:30.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.162 "dma_device_type": 2 00:11:30.162 } 00:11:30.162 ], 00:11:30.162 "driver_specific": { 00:11:30.162 "raid": { 00:11:30.162 "uuid": "9174653b-231f-4c02-badc-816708ee372b", 00:11:30.162 "strip_size_kb": 64, 00:11:30.162 "state": "online", 00:11:30.162 "raid_level": "concat", 00:11:30.162 "superblock": true, 00:11:30.162 "num_base_bdevs": 3, 00:11:30.162 "num_base_bdevs_discovered": 3, 00:11:30.162 "num_base_bdevs_operational": 3, 00:11:30.162 "base_bdevs_list": [ 00:11:30.162 { 00:11:30.162 
"name": "BaseBdev1", 00:11:30.162 "uuid": "5cba8f8b-fa8a-4fb8-a901-61c4eb77b9ad", 00:11:30.162 "is_configured": true, 00:11:30.162 "data_offset": 2048, 00:11:30.162 "data_size": 63488 00:11:30.162 }, 00:11:30.162 { 00:11:30.162 "name": "BaseBdev2", 00:11:30.162 "uuid": "ae03ab9c-aee5-454d-9132-3ad8b51a2e24", 00:11:30.162 "is_configured": true, 00:11:30.162 "data_offset": 2048, 00:11:30.162 "data_size": 63488 00:11:30.162 }, 00:11:30.162 { 00:11:30.162 "name": "BaseBdev3", 00:11:30.162 "uuid": "78e2c82e-288c-41d2-975c-c0b35f60f0a7", 00:11:30.162 "is_configured": true, 00:11:30.162 "data_offset": 2048, 00:11:30.162 "data_size": 63488 00:11:30.162 } 00:11:30.162 ] 00:11:30.162 } 00:11:30.162 } 00:11:30.162 }' 00:11:30.162 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:30.421 BaseBdev2 00:11:30.421 BaseBdev3' 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.421 16:27:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.421 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.421 [2024-12-06 16:27:12.255829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.421 [2024-12-06 16:27:12.255910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.421 [2024-12-06 16:27:12.256009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.681 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.681 "name": "Existed_Raid", 00:11:30.681 "uuid": "9174653b-231f-4c02-badc-816708ee372b", 00:11:30.681 "strip_size_kb": 64, 00:11:30.681 "state": "offline", 00:11:30.681 "raid_level": "concat", 00:11:30.681 "superblock": true, 00:11:30.681 "num_base_bdevs": 3, 00:11:30.681 "num_base_bdevs_discovered": 2, 00:11:30.681 "num_base_bdevs_operational": 2, 00:11:30.681 "base_bdevs_list": [ 00:11:30.682 { 00:11:30.682 "name": null, 00:11:30.682 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:30.682 "is_configured": false, 00:11:30.682 "data_offset": 0, 00:11:30.682 "data_size": 63488 00:11:30.682 }, 00:11:30.682 { 00:11:30.682 "name": "BaseBdev2", 00:11:30.682 "uuid": "ae03ab9c-aee5-454d-9132-3ad8b51a2e24", 00:11:30.682 "is_configured": true, 00:11:30.682 "data_offset": 2048, 00:11:30.682 "data_size": 63488 00:11:30.682 }, 00:11:30.682 { 00:11:30.682 "name": "BaseBdev3", 00:11:30.682 "uuid": "78e2c82e-288c-41d2-975c-c0b35f60f0a7", 00:11:30.682 "is_configured": true, 00:11:30.682 "data_offset": 2048, 00:11:30.682 "data_size": 63488 00:11:30.682 } 00:11:30.682 ] 00:11:30.682 }' 00:11:30.682 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.682 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.941 [2024-12-06 16:27:12.731083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.941 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.201 [2024-12-06 16:27:12.802576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:31.201 [2024-12-06 16:27:12.802690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.201 BaseBdev2 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.201 
16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.201 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.201 [ 00:11:31.201 { 00:11:31.201 "name": "BaseBdev2", 00:11:31.202 "aliases": [ 00:11:31.202 "89baebfc-b0e6-48a4-b802-ccf24183467b" 00:11:31.202 ], 00:11:31.202 "product_name": "Malloc disk", 00:11:31.202 "block_size": 512, 00:11:31.202 "num_blocks": 65536, 00:11:31.202 "uuid": "89baebfc-b0e6-48a4-b802-ccf24183467b", 00:11:31.202 "assigned_rate_limits": { 00:11:31.202 "rw_ios_per_sec": 0, 00:11:31.202 "rw_mbytes_per_sec": 0, 00:11:31.202 "r_mbytes_per_sec": 0, 00:11:31.202 "w_mbytes_per_sec": 0 
00:11:31.202 }, 00:11:31.202 "claimed": false, 00:11:31.202 "zoned": false, 00:11:31.202 "supported_io_types": { 00:11:31.202 "read": true, 00:11:31.202 "write": true, 00:11:31.202 "unmap": true, 00:11:31.202 "flush": true, 00:11:31.202 "reset": true, 00:11:31.202 "nvme_admin": false, 00:11:31.202 "nvme_io": false, 00:11:31.202 "nvme_io_md": false, 00:11:31.202 "write_zeroes": true, 00:11:31.202 "zcopy": true, 00:11:31.202 "get_zone_info": false, 00:11:31.202 "zone_management": false, 00:11:31.202 "zone_append": false, 00:11:31.202 "compare": false, 00:11:31.202 "compare_and_write": false, 00:11:31.202 "abort": true, 00:11:31.202 "seek_hole": false, 00:11:31.202 "seek_data": false, 00:11:31.202 "copy": true, 00:11:31.202 "nvme_iov_md": false 00:11:31.202 }, 00:11:31.202 "memory_domains": [ 00:11:31.202 { 00:11:31.202 "dma_device_id": "system", 00:11:31.202 "dma_device_type": 1 00:11:31.202 }, 00:11:31.202 { 00:11:31.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.202 "dma_device_type": 2 00:11:31.202 } 00:11:31.202 ], 00:11:31.202 "driver_specific": {} 00:11:31.202 } 00:11:31.202 ] 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.202 BaseBdev3 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.202 [ 00:11:31.202 { 00:11:31.202 "name": "BaseBdev3", 00:11:31.202 "aliases": [ 00:11:31.202 "ab1a6454-f67f-4d4e-8ff0-14e44416c4c4" 00:11:31.202 ], 00:11:31.202 "product_name": "Malloc disk", 00:11:31.202 "block_size": 512, 00:11:31.202 "num_blocks": 65536, 00:11:31.202 "uuid": "ab1a6454-f67f-4d4e-8ff0-14e44416c4c4", 00:11:31.202 "assigned_rate_limits": { 00:11:31.202 "rw_ios_per_sec": 0, 00:11:31.202 "rw_mbytes_per_sec": 0, 
00:11:31.202 "r_mbytes_per_sec": 0, 00:11:31.202 "w_mbytes_per_sec": 0 00:11:31.202 }, 00:11:31.202 "claimed": false, 00:11:31.202 "zoned": false, 00:11:31.202 "supported_io_types": { 00:11:31.202 "read": true, 00:11:31.202 "write": true, 00:11:31.202 "unmap": true, 00:11:31.202 "flush": true, 00:11:31.202 "reset": true, 00:11:31.202 "nvme_admin": false, 00:11:31.202 "nvme_io": false, 00:11:31.202 "nvme_io_md": false, 00:11:31.202 "write_zeroes": true, 00:11:31.202 "zcopy": true, 00:11:31.202 "get_zone_info": false, 00:11:31.202 "zone_management": false, 00:11:31.202 "zone_append": false, 00:11:31.202 "compare": false, 00:11:31.202 "compare_and_write": false, 00:11:31.202 "abort": true, 00:11:31.202 "seek_hole": false, 00:11:31.202 "seek_data": false, 00:11:31.202 "copy": true, 00:11:31.202 "nvme_iov_md": false 00:11:31.202 }, 00:11:31.202 "memory_domains": [ 00:11:31.202 { 00:11:31.202 "dma_device_id": "system", 00:11:31.202 "dma_device_type": 1 00:11:31.202 }, 00:11:31.202 { 00:11:31.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.202 "dma_device_type": 2 00:11:31.202 } 00:11:31.202 ], 00:11:31.202 "driver_specific": {} 00:11:31.202 } 00:11:31.202 ] 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:31.202 [2024-12-06 16:27:12.985583] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.202 [2024-12-06 16:27:12.985676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.202 [2024-12-06 16:27:12.985717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.202 [2024-12-06 16:27:12.987571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.202 16:27:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.202 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.202 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.463 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.463 "name": "Existed_Raid", 00:11:31.463 "uuid": "c252d1fd-e2dc-4d67-8715-6a208265042b", 00:11:31.463 "strip_size_kb": 64, 00:11:31.463 "state": "configuring", 00:11:31.463 "raid_level": "concat", 00:11:31.463 "superblock": true, 00:11:31.463 "num_base_bdevs": 3, 00:11:31.463 "num_base_bdevs_discovered": 2, 00:11:31.463 "num_base_bdevs_operational": 3, 00:11:31.463 "base_bdevs_list": [ 00:11:31.463 { 00:11:31.463 "name": "BaseBdev1", 00:11:31.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.463 "is_configured": false, 00:11:31.463 "data_offset": 0, 00:11:31.463 "data_size": 0 00:11:31.463 }, 00:11:31.463 { 00:11:31.463 "name": "BaseBdev2", 00:11:31.463 "uuid": "89baebfc-b0e6-48a4-b802-ccf24183467b", 00:11:31.463 "is_configured": true, 00:11:31.463 "data_offset": 2048, 00:11:31.463 "data_size": 63488 00:11:31.463 }, 00:11:31.463 { 00:11:31.463 "name": "BaseBdev3", 00:11:31.463 "uuid": "ab1a6454-f67f-4d4e-8ff0-14e44416c4c4", 00:11:31.463 "is_configured": true, 00:11:31.463 "data_offset": 2048, 00:11:31.463 "data_size": 63488 00:11:31.463 } 00:11:31.463 ] 00:11:31.463 }' 00:11:31.463 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.463 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.724 [2024-12-06 16:27:13.428869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.724 "name": "Existed_Raid", 00:11:31.724 "uuid": "c252d1fd-e2dc-4d67-8715-6a208265042b", 00:11:31.724 "strip_size_kb": 64, 00:11:31.724 "state": "configuring", 00:11:31.724 "raid_level": "concat", 00:11:31.724 "superblock": true, 00:11:31.724 "num_base_bdevs": 3, 00:11:31.724 "num_base_bdevs_discovered": 1, 00:11:31.724 "num_base_bdevs_operational": 3, 00:11:31.724 "base_bdevs_list": [ 00:11:31.724 { 00:11:31.724 "name": "BaseBdev1", 00:11:31.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.724 "is_configured": false, 00:11:31.724 "data_offset": 0, 00:11:31.724 "data_size": 0 00:11:31.724 }, 00:11:31.724 { 00:11:31.724 "name": null, 00:11:31.724 "uuid": "89baebfc-b0e6-48a4-b802-ccf24183467b", 00:11:31.724 "is_configured": false, 00:11:31.724 "data_offset": 0, 00:11:31.724 "data_size": 63488 00:11:31.724 }, 00:11:31.724 { 00:11:31.724 "name": "BaseBdev3", 00:11:31.724 "uuid": "ab1a6454-f67f-4d4e-8ff0-14e44416c4c4", 00:11:31.724 "is_configured": true, 00:11:31.724 "data_offset": 2048, 00:11:31.724 "data_size": 63488 00:11:31.724 } 00:11:31.724 ] 00:11:31.724 }' 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.724 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.293 [2024-12-06 16:27:13.931290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.293 BaseBdev1 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.293 16:27:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.293 [ 00:11:32.293 { 00:11:32.293 "name": "BaseBdev1", 00:11:32.293 "aliases": [ 00:11:32.293 "37b0b37e-e8af-41b1-9aef-08cfd05886aa" 00:11:32.293 ], 00:11:32.293 "product_name": "Malloc disk", 00:11:32.293 "block_size": 512, 00:11:32.293 "num_blocks": 65536, 00:11:32.293 "uuid": "37b0b37e-e8af-41b1-9aef-08cfd05886aa", 00:11:32.293 "assigned_rate_limits": { 00:11:32.293 "rw_ios_per_sec": 0, 00:11:32.293 "rw_mbytes_per_sec": 0, 00:11:32.293 "r_mbytes_per_sec": 0, 00:11:32.293 "w_mbytes_per_sec": 0 00:11:32.293 }, 00:11:32.293 "claimed": true, 00:11:32.293 "claim_type": "exclusive_write", 00:11:32.293 "zoned": false, 00:11:32.293 "supported_io_types": { 00:11:32.293 "read": true, 00:11:32.293 "write": true, 00:11:32.293 "unmap": true, 00:11:32.293 "flush": true, 00:11:32.293 "reset": true, 00:11:32.293 "nvme_admin": false, 00:11:32.293 "nvme_io": false, 00:11:32.293 "nvme_io_md": false, 00:11:32.293 "write_zeroes": true, 00:11:32.293 "zcopy": true, 00:11:32.293 "get_zone_info": false, 00:11:32.293 "zone_management": false, 00:11:32.293 "zone_append": false, 00:11:32.293 "compare": false, 00:11:32.293 "compare_and_write": false, 00:11:32.293 "abort": true, 00:11:32.293 "seek_hole": false, 00:11:32.293 "seek_data": false, 00:11:32.293 "copy": true, 00:11:32.293 "nvme_iov_md": false 00:11:32.293 }, 00:11:32.293 "memory_domains": [ 00:11:32.293 { 00:11:32.293 "dma_device_id": "system", 00:11:32.293 "dma_device_type": 1 00:11:32.293 }, 00:11:32.293 { 00:11:32.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.293 
"dma_device_type": 2 00:11:32.293 } 00:11:32.293 ], 00:11:32.293 "driver_specific": {} 00:11:32.293 } 00:11:32.293 ] 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.293 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.294 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.294 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.294 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:32.294 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.294 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.294 "name": "Existed_Raid", 00:11:32.294 "uuid": "c252d1fd-e2dc-4d67-8715-6a208265042b", 00:11:32.294 "strip_size_kb": 64, 00:11:32.294 "state": "configuring", 00:11:32.294 "raid_level": "concat", 00:11:32.294 "superblock": true, 00:11:32.294 "num_base_bdevs": 3, 00:11:32.294 "num_base_bdevs_discovered": 2, 00:11:32.294 "num_base_bdevs_operational": 3, 00:11:32.294 "base_bdevs_list": [ 00:11:32.294 { 00:11:32.294 "name": "BaseBdev1", 00:11:32.294 "uuid": "37b0b37e-e8af-41b1-9aef-08cfd05886aa", 00:11:32.294 "is_configured": true, 00:11:32.294 "data_offset": 2048, 00:11:32.294 "data_size": 63488 00:11:32.294 }, 00:11:32.294 { 00:11:32.294 "name": null, 00:11:32.294 "uuid": "89baebfc-b0e6-48a4-b802-ccf24183467b", 00:11:32.294 "is_configured": false, 00:11:32.294 "data_offset": 0, 00:11:32.294 "data_size": 63488 00:11:32.294 }, 00:11:32.294 { 00:11:32.294 "name": "BaseBdev3", 00:11:32.294 "uuid": "ab1a6454-f67f-4d4e-8ff0-14e44416c4c4", 00:11:32.294 "is_configured": true, 00:11:32.294 "data_offset": 2048, 00:11:32.294 "data_size": 63488 00:11:32.294 } 00:11:32.294 ] 00:11:32.294 }' 00:11:32.294 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.294 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.881 [2024-12-06 16:27:14.470477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.881 
16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.881 "name": "Existed_Raid", 00:11:32.881 "uuid": "c252d1fd-e2dc-4d67-8715-6a208265042b", 00:11:32.881 "strip_size_kb": 64, 00:11:32.881 "state": "configuring", 00:11:32.881 "raid_level": "concat", 00:11:32.881 "superblock": true, 00:11:32.881 "num_base_bdevs": 3, 00:11:32.881 "num_base_bdevs_discovered": 1, 00:11:32.881 "num_base_bdevs_operational": 3, 00:11:32.881 "base_bdevs_list": [ 00:11:32.881 { 00:11:32.881 "name": "BaseBdev1", 00:11:32.881 "uuid": "37b0b37e-e8af-41b1-9aef-08cfd05886aa", 00:11:32.881 "is_configured": true, 00:11:32.881 "data_offset": 2048, 00:11:32.881 "data_size": 63488 00:11:32.881 }, 00:11:32.881 { 00:11:32.881 "name": null, 00:11:32.881 "uuid": "89baebfc-b0e6-48a4-b802-ccf24183467b", 00:11:32.881 "is_configured": false, 00:11:32.881 "data_offset": 0, 00:11:32.881 "data_size": 63488 00:11:32.881 }, 00:11:32.881 { 00:11:32.881 "name": null, 00:11:32.881 "uuid": "ab1a6454-f67f-4d4e-8ff0-14e44416c4c4", 00:11:32.881 "is_configured": false, 00:11:32.881 "data_offset": 0, 00:11:32.881 "data_size": 63488 00:11:32.881 } 00:11:32.881 ] 00:11:32.881 }' 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.881 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.142 
16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.142 [2024-12-06 16:27:14.949694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.142 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.414 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.415 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.415 "name": "Existed_Raid", 00:11:33.415 "uuid": "c252d1fd-e2dc-4d67-8715-6a208265042b", 00:11:33.415 "strip_size_kb": 64, 00:11:33.415 "state": "configuring", 00:11:33.415 "raid_level": "concat", 00:11:33.415 "superblock": true, 00:11:33.415 "num_base_bdevs": 3, 00:11:33.415 "num_base_bdevs_discovered": 2, 00:11:33.415 "num_base_bdevs_operational": 3, 00:11:33.415 "base_bdevs_list": [ 00:11:33.415 { 00:11:33.415 "name": "BaseBdev1", 00:11:33.415 "uuid": "37b0b37e-e8af-41b1-9aef-08cfd05886aa", 00:11:33.415 "is_configured": true, 00:11:33.415 "data_offset": 2048, 00:11:33.415 "data_size": 63488 00:11:33.415 }, 00:11:33.415 { 00:11:33.415 "name": null, 00:11:33.415 "uuid": "89baebfc-b0e6-48a4-b802-ccf24183467b", 00:11:33.415 "is_configured": false, 00:11:33.415 "data_offset": 0, 00:11:33.415 "data_size": 
63488 00:11:33.415 }, 00:11:33.415 { 00:11:33.415 "name": "BaseBdev3", 00:11:33.415 "uuid": "ab1a6454-f67f-4d4e-8ff0-14e44416c4c4", 00:11:33.415 "is_configured": true, 00:11:33.415 "data_offset": 2048, 00:11:33.415 "data_size": 63488 00:11:33.415 } 00:11:33.415 ] 00:11:33.415 }' 00:11:33.415 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.415 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.700 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.700 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.700 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.700 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.701 [2024-12-06 16:27:15.472853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.701 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.960 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.960 "name": "Existed_Raid", 00:11:33.960 "uuid": "c252d1fd-e2dc-4d67-8715-6a208265042b", 00:11:33.960 "strip_size_kb": 64, 00:11:33.960 "state": "configuring", 00:11:33.960 "raid_level": "concat", 00:11:33.960 "superblock": true, 00:11:33.960 "num_base_bdevs": 3, 00:11:33.960 "num_base_bdevs_discovered": 1, 00:11:33.960 "num_base_bdevs_operational": 
3, 00:11:33.960 "base_bdevs_list": [ 00:11:33.960 { 00:11:33.960 "name": null, 00:11:33.960 "uuid": "37b0b37e-e8af-41b1-9aef-08cfd05886aa", 00:11:33.960 "is_configured": false, 00:11:33.960 "data_offset": 0, 00:11:33.960 "data_size": 63488 00:11:33.960 }, 00:11:33.960 { 00:11:33.960 "name": null, 00:11:33.960 "uuid": "89baebfc-b0e6-48a4-b802-ccf24183467b", 00:11:33.960 "is_configured": false, 00:11:33.960 "data_offset": 0, 00:11:33.960 "data_size": 63488 00:11:33.960 }, 00:11:33.960 { 00:11:33.960 "name": "BaseBdev3", 00:11:33.960 "uuid": "ab1a6454-f67f-4d4e-8ff0-14e44416c4c4", 00:11:33.960 "is_configured": true, 00:11:33.960 "data_offset": 2048, 00:11:33.960 "data_size": 63488 00:11:33.960 } 00:11:33.960 ] 00:11:33.960 }' 00:11:33.960 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.960 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.220 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.220 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.220 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.220 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:34.220 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:34.220 [2024-12-06 16:27:16.018643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.220 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:34.480 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.480 "name": "Existed_Raid", 00:11:34.480 "uuid": "c252d1fd-e2dc-4d67-8715-6a208265042b", 00:11:34.480 "strip_size_kb": 64, 00:11:34.480 "state": "configuring", 00:11:34.480 "raid_level": "concat", 00:11:34.480 "superblock": true, 00:11:34.480 "num_base_bdevs": 3, 00:11:34.480 "num_base_bdevs_discovered": 2, 00:11:34.480 "num_base_bdevs_operational": 3, 00:11:34.480 "base_bdevs_list": [ 00:11:34.480 { 00:11:34.480 "name": null, 00:11:34.480 "uuid": "37b0b37e-e8af-41b1-9aef-08cfd05886aa", 00:11:34.480 "is_configured": false, 00:11:34.480 "data_offset": 0, 00:11:34.480 "data_size": 63488 00:11:34.480 }, 00:11:34.480 { 00:11:34.480 "name": "BaseBdev2", 00:11:34.480 "uuid": "89baebfc-b0e6-48a4-b802-ccf24183467b", 00:11:34.480 "is_configured": true, 00:11:34.480 "data_offset": 2048, 00:11:34.480 "data_size": 63488 00:11:34.480 }, 00:11:34.480 { 00:11:34.480 "name": "BaseBdev3", 00:11:34.480 "uuid": "ab1a6454-f67f-4d4e-8ff0-14e44416c4c4", 00:11:34.480 "is_configured": true, 00:11:34.480 "data_offset": 2048, 00:11:34.480 "data_size": 63488 00:11:34.480 } 00:11:34.480 ] 00:11:34.480 }' 00:11:34.480 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.480 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 37b0b37e-e8af-41b1-9aef-08cfd05886aa 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.740 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.000 [2024-12-06 16:27:16.585103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:35.000 [2024-12-06 16:27:16.585425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:35.000 [2024-12-06 16:27:16.585487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:35.000 NewBaseBdev 00:11:35.000 [2024-12-06 16:27:16.585816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:35.000 [2024-12-06 16:27:16.585957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:35.000 [2024-12-06 16:27:16.585969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:35.000 [2024-12-06 16:27:16.586082] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.000 [ 00:11:35.000 { 00:11:35.000 "name": "NewBaseBdev", 00:11:35.000 "aliases": [ 00:11:35.000 "37b0b37e-e8af-41b1-9aef-08cfd05886aa" 00:11:35.000 ], 00:11:35.000 "product_name": "Malloc disk", 00:11:35.000 "block_size": 512, 00:11:35.000 "num_blocks": 65536, 00:11:35.000 "uuid": 
"37b0b37e-e8af-41b1-9aef-08cfd05886aa", 00:11:35.000 "assigned_rate_limits": { 00:11:35.000 "rw_ios_per_sec": 0, 00:11:35.000 "rw_mbytes_per_sec": 0, 00:11:35.000 "r_mbytes_per_sec": 0, 00:11:35.000 "w_mbytes_per_sec": 0 00:11:35.000 }, 00:11:35.000 "claimed": true, 00:11:35.000 "claim_type": "exclusive_write", 00:11:35.000 "zoned": false, 00:11:35.000 "supported_io_types": { 00:11:35.000 "read": true, 00:11:35.000 "write": true, 00:11:35.000 "unmap": true, 00:11:35.000 "flush": true, 00:11:35.000 "reset": true, 00:11:35.000 "nvme_admin": false, 00:11:35.000 "nvme_io": false, 00:11:35.000 "nvme_io_md": false, 00:11:35.000 "write_zeroes": true, 00:11:35.000 "zcopy": true, 00:11:35.000 "get_zone_info": false, 00:11:35.000 "zone_management": false, 00:11:35.000 "zone_append": false, 00:11:35.000 "compare": false, 00:11:35.000 "compare_and_write": false, 00:11:35.000 "abort": true, 00:11:35.000 "seek_hole": false, 00:11:35.000 "seek_data": false, 00:11:35.000 "copy": true, 00:11:35.000 "nvme_iov_md": false 00:11:35.000 }, 00:11:35.000 "memory_domains": [ 00:11:35.000 { 00:11:35.000 "dma_device_id": "system", 00:11:35.000 "dma_device_type": 1 00:11:35.000 }, 00:11:35.000 { 00:11:35.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.000 "dma_device_type": 2 00:11:35.000 } 00:11:35.000 ], 00:11:35.000 "driver_specific": {} 00:11:35.000 } 00:11:35.000 ] 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.000 16:27:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.000 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.001 "name": "Existed_Raid", 00:11:35.001 "uuid": "c252d1fd-e2dc-4d67-8715-6a208265042b", 00:11:35.001 "strip_size_kb": 64, 00:11:35.001 "state": "online", 00:11:35.001 "raid_level": "concat", 00:11:35.001 "superblock": true, 00:11:35.001 "num_base_bdevs": 3, 00:11:35.001 "num_base_bdevs_discovered": 3, 00:11:35.001 "num_base_bdevs_operational": 3, 00:11:35.001 "base_bdevs_list": [ 00:11:35.001 { 00:11:35.001 "name": "NewBaseBdev", 00:11:35.001 "uuid": "37b0b37e-e8af-41b1-9aef-08cfd05886aa", 00:11:35.001 "is_configured": 
true, 00:11:35.001 "data_offset": 2048, 00:11:35.001 "data_size": 63488 00:11:35.001 }, 00:11:35.001 { 00:11:35.001 "name": "BaseBdev2", 00:11:35.001 "uuid": "89baebfc-b0e6-48a4-b802-ccf24183467b", 00:11:35.001 "is_configured": true, 00:11:35.001 "data_offset": 2048, 00:11:35.001 "data_size": 63488 00:11:35.001 }, 00:11:35.001 { 00:11:35.001 "name": "BaseBdev3", 00:11:35.001 "uuid": "ab1a6454-f67f-4d4e-8ff0-14e44416c4c4", 00:11:35.001 "is_configured": true, 00:11:35.001 "data_offset": 2048, 00:11:35.001 "data_size": 63488 00:11:35.001 } 00:11:35.001 ] 00:11:35.001 }' 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.001 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.261 [2024-12-06 16:27:17.060708] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.261 "name": "Existed_Raid", 00:11:35.261 "aliases": [ 00:11:35.261 "c252d1fd-e2dc-4d67-8715-6a208265042b" 00:11:35.261 ], 00:11:35.261 "product_name": "Raid Volume", 00:11:35.261 "block_size": 512, 00:11:35.261 "num_blocks": 190464, 00:11:35.261 "uuid": "c252d1fd-e2dc-4d67-8715-6a208265042b", 00:11:35.261 "assigned_rate_limits": { 00:11:35.261 "rw_ios_per_sec": 0, 00:11:35.261 "rw_mbytes_per_sec": 0, 00:11:35.261 "r_mbytes_per_sec": 0, 00:11:35.261 "w_mbytes_per_sec": 0 00:11:35.261 }, 00:11:35.261 "claimed": false, 00:11:35.261 "zoned": false, 00:11:35.261 "supported_io_types": { 00:11:35.261 "read": true, 00:11:35.261 "write": true, 00:11:35.261 "unmap": true, 00:11:35.261 "flush": true, 00:11:35.261 "reset": true, 00:11:35.261 "nvme_admin": false, 00:11:35.261 "nvme_io": false, 00:11:35.261 "nvme_io_md": false, 00:11:35.261 "write_zeroes": true, 00:11:35.261 "zcopy": false, 00:11:35.261 "get_zone_info": false, 00:11:35.261 "zone_management": false, 00:11:35.261 "zone_append": false, 00:11:35.261 "compare": false, 00:11:35.261 "compare_and_write": false, 00:11:35.261 "abort": false, 00:11:35.261 "seek_hole": false, 00:11:35.261 "seek_data": false, 00:11:35.261 "copy": false, 00:11:35.261 "nvme_iov_md": false 00:11:35.261 }, 00:11:35.261 "memory_domains": [ 00:11:35.261 { 00:11:35.261 "dma_device_id": "system", 00:11:35.261 "dma_device_type": 1 00:11:35.261 }, 00:11:35.261 { 00:11:35.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.261 "dma_device_type": 2 00:11:35.261 }, 00:11:35.261 { 00:11:35.261 "dma_device_id": "system", 00:11:35.261 "dma_device_type": 1 00:11:35.261 }, 00:11:35.261 { 00:11:35.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.261 
"dma_device_type": 2 00:11:35.261 }, 00:11:35.261 { 00:11:35.261 "dma_device_id": "system", 00:11:35.261 "dma_device_type": 1 00:11:35.261 }, 00:11:35.261 { 00:11:35.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.261 "dma_device_type": 2 00:11:35.261 } 00:11:35.261 ], 00:11:35.261 "driver_specific": { 00:11:35.261 "raid": { 00:11:35.261 "uuid": "c252d1fd-e2dc-4d67-8715-6a208265042b", 00:11:35.261 "strip_size_kb": 64, 00:11:35.261 "state": "online", 00:11:35.261 "raid_level": "concat", 00:11:35.261 "superblock": true, 00:11:35.261 "num_base_bdevs": 3, 00:11:35.261 "num_base_bdevs_discovered": 3, 00:11:35.261 "num_base_bdevs_operational": 3, 00:11:35.261 "base_bdevs_list": [ 00:11:35.261 { 00:11:35.261 "name": "NewBaseBdev", 00:11:35.261 "uuid": "37b0b37e-e8af-41b1-9aef-08cfd05886aa", 00:11:35.261 "is_configured": true, 00:11:35.261 "data_offset": 2048, 00:11:35.261 "data_size": 63488 00:11:35.261 }, 00:11:35.261 { 00:11:35.261 "name": "BaseBdev2", 00:11:35.261 "uuid": "89baebfc-b0e6-48a4-b802-ccf24183467b", 00:11:35.261 "is_configured": true, 00:11:35.261 "data_offset": 2048, 00:11:35.261 "data_size": 63488 00:11:35.261 }, 00:11:35.261 { 00:11:35.261 "name": "BaseBdev3", 00:11:35.261 "uuid": "ab1a6454-f67f-4d4e-8ff0-14e44416c4c4", 00:11:35.261 "is_configured": true, 00:11:35.261 "data_offset": 2048, 00:11:35.261 "data_size": 63488 00:11:35.261 } 00:11:35.261 ] 00:11:35.261 } 00:11:35.261 } 00:11:35.261 }' 00:11:35.261 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.521 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:35.521 BaseBdev2 00:11:35.521 BaseBdev3' 00:11:35.521 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.521 16:27:17 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.521 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.521 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:35.521 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.521 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.521 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.522 
16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.522 [2024-12-06 16:27:17.347895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.522 [2024-12-06 16:27:17.348003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.522 [2024-12-06 16:27:17.348117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.522 [2024-12-06 16:27:17.348193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.522 [2024-12-06 16:27:17.348278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:11:35.522 16:27:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77725 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77725 ']' 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 77725 00:11:35.522 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:35.782 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.782 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77725 00:11:35.782 killing process with pid 77725 00:11:35.782 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.782 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.782 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77725' 00:11:35.782 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 77725 00:11:35.782 [2024-12-06 16:27:17.398798] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.782 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 77725 00:11:35.782 [2024-12-06 16:27:17.431961] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.042 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:36.042 00:11:36.042 real 0m9.139s 00:11:36.042 user 0m15.627s 00:11:36.042 sys 0m1.882s 00:11:36.042 ************************************ 00:11:36.042 END TEST raid_state_function_test_sb 00:11:36.042 ************************************ 00:11:36.042 16:27:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.042 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.042 16:27:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:36.042 16:27:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:36.042 16:27:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.042 16:27:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.042 ************************************ 00:11:36.042 START TEST raid_superblock_test 00:11:36.042 ************************************ 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78334 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78334 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 78334 ']' 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.042 16:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.042 [2024-12-06 16:27:17.816390] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:11:36.042 [2024-12-06 16:27:17.816645] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78334 ] 00:11:36.302 [2024-12-06 16:27:17.988816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.302 [2024-12-06 16:27:18.015501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.302 [2024-12-06 16:27:18.058841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.302 [2024-12-06 16:27:18.058880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.868 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.868 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:36.869 
16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.869 malloc1 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.869 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.869 [2024-12-06 16:27:18.703280] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:36.869 [2024-12-06 16:27:18.703452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.869 [2024-12-06 16:27:18.703503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:36.869 [2024-12-06 16:27:18.703547] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.869 [2024-12-06 16:27:18.706010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.869 [2024-12-06 16:27:18.706101] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:37.127 pt1 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.127 malloc2 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.127 [2024-12-06 16:27:18.728705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:37.127 [2024-12-06 16:27:18.728891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.127 [2024-12-06 16:27:18.728931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:37.127 [2024-12-06 16:27:18.728965] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.127 [2024-12-06 16:27:18.731399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.127 [2024-12-06 16:27:18.731508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:37.127 
pt2 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:37.127 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.128 malloc3 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.128 [2024-12-06 16:27:18.757682] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:37.128 [2024-12-06 16:27:18.757849] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.128 [2024-12-06 16:27:18.757896] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:37.128 [2024-12-06 16:27:18.757941] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.128 [2024-12-06 16:27:18.760554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.128 [2024-12-06 16:27:18.760647] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:37.128 pt3 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.128 [2024-12-06 16:27:18.769683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:37.128 [2024-12-06 16:27:18.771570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:37.128 [2024-12-06 16:27:18.771722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:37.128 [2024-12-06 16:27:18.771928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:37.128 [2024-12-06 16:27:18.772015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:37.128 [2024-12-06 16:27:18.772379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:11:37.128 [2024-12-06 16:27:18.772562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:37.128 [2024-12-06 16:27:18.772609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:37.128 [2024-12-06 16:27:18.772801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.128 16:27:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.128 "name": "raid_bdev1", 00:11:37.128 "uuid": "acfe2160-0cf0-431c-b0fe-3db8f5f750ee", 00:11:37.128 "strip_size_kb": 64, 00:11:37.128 "state": "online", 00:11:37.128 "raid_level": "concat", 00:11:37.128 "superblock": true, 00:11:37.128 "num_base_bdevs": 3, 00:11:37.128 "num_base_bdevs_discovered": 3, 00:11:37.128 "num_base_bdevs_operational": 3, 00:11:37.128 "base_bdevs_list": [ 00:11:37.128 { 00:11:37.128 "name": "pt1", 00:11:37.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.128 "is_configured": true, 00:11:37.128 "data_offset": 2048, 00:11:37.128 "data_size": 63488 00:11:37.128 }, 00:11:37.128 { 00:11:37.128 "name": "pt2", 00:11:37.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.128 "is_configured": true, 00:11:37.128 "data_offset": 2048, 00:11:37.128 "data_size": 63488 00:11:37.128 }, 00:11:37.128 { 00:11:37.128 "name": "pt3", 00:11:37.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.128 "is_configured": true, 00:11:37.128 "data_offset": 2048, 00:11:37.128 "data_size": 63488 00:11:37.128 } 00:11:37.128 ] 00:11:37.128 }' 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.128 16:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.697 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:37.697 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:37.697 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:37.697 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:37.697 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:37.697 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:37.697 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:37.697 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:37.697 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.697 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.697 [2024-12-06 16:27:19.289147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.697 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.697 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:37.697 "name": "raid_bdev1", 00:11:37.697 "aliases": [ 00:11:37.697 "acfe2160-0cf0-431c-b0fe-3db8f5f750ee" 00:11:37.697 ], 00:11:37.697 "product_name": "Raid Volume", 00:11:37.697 "block_size": 512, 00:11:37.697 "num_blocks": 190464, 00:11:37.697 "uuid": "acfe2160-0cf0-431c-b0fe-3db8f5f750ee", 00:11:37.697 "assigned_rate_limits": { 00:11:37.698 "rw_ios_per_sec": 0, 00:11:37.698 "rw_mbytes_per_sec": 0, 00:11:37.698 "r_mbytes_per_sec": 0, 00:11:37.698 "w_mbytes_per_sec": 0 00:11:37.698 }, 00:11:37.698 "claimed": false, 00:11:37.698 "zoned": false, 00:11:37.698 "supported_io_types": { 00:11:37.698 "read": true, 00:11:37.698 "write": true, 00:11:37.698 "unmap": true, 00:11:37.698 "flush": true, 00:11:37.698 "reset": true, 00:11:37.698 "nvme_admin": false, 00:11:37.698 "nvme_io": false, 00:11:37.698 "nvme_io_md": false, 00:11:37.698 "write_zeroes": true, 00:11:37.698 "zcopy": false, 00:11:37.698 "get_zone_info": false, 00:11:37.698 "zone_management": false, 00:11:37.698 "zone_append": false, 00:11:37.698 "compare": 
false, 00:11:37.698 "compare_and_write": false, 00:11:37.698 "abort": false, 00:11:37.698 "seek_hole": false, 00:11:37.698 "seek_data": false, 00:11:37.698 "copy": false, 00:11:37.698 "nvme_iov_md": false 00:11:37.698 }, 00:11:37.698 "memory_domains": [ 00:11:37.698 { 00:11:37.698 "dma_device_id": "system", 00:11:37.698 "dma_device_type": 1 00:11:37.698 }, 00:11:37.698 { 00:11:37.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.698 "dma_device_type": 2 00:11:37.698 }, 00:11:37.698 { 00:11:37.698 "dma_device_id": "system", 00:11:37.698 "dma_device_type": 1 00:11:37.698 }, 00:11:37.698 { 00:11:37.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.698 "dma_device_type": 2 00:11:37.698 }, 00:11:37.698 { 00:11:37.698 "dma_device_id": "system", 00:11:37.698 "dma_device_type": 1 00:11:37.698 }, 00:11:37.698 { 00:11:37.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.698 "dma_device_type": 2 00:11:37.698 } 00:11:37.698 ], 00:11:37.698 "driver_specific": { 00:11:37.698 "raid": { 00:11:37.698 "uuid": "acfe2160-0cf0-431c-b0fe-3db8f5f750ee", 00:11:37.698 "strip_size_kb": 64, 00:11:37.698 "state": "online", 00:11:37.698 "raid_level": "concat", 00:11:37.698 "superblock": true, 00:11:37.698 "num_base_bdevs": 3, 00:11:37.698 "num_base_bdevs_discovered": 3, 00:11:37.698 "num_base_bdevs_operational": 3, 00:11:37.698 "base_bdevs_list": [ 00:11:37.698 { 00:11:37.698 "name": "pt1", 00:11:37.698 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.698 "is_configured": true, 00:11:37.698 "data_offset": 2048, 00:11:37.698 "data_size": 63488 00:11:37.698 }, 00:11:37.698 { 00:11:37.698 "name": "pt2", 00:11:37.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.698 "is_configured": true, 00:11:37.698 "data_offset": 2048, 00:11:37.698 "data_size": 63488 00:11:37.698 }, 00:11:37.698 { 00:11:37.698 "name": "pt3", 00:11:37.698 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.698 "is_configured": true, 00:11:37.698 "data_offset": 2048, 00:11:37.698 
"data_size": 63488 00:11:37.698 } 00:11:37.698 ] 00:11:37.698 } 00:11:37.698 } 00:11:37.698 }' 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:37.698 pt2 00:11:37.698 pt3' 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:37.698 16:27:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:37.698 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:37.957 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.957 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.957 [2024-12-06 16:27:19.544705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.957 16:27:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.957 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=acfe2160-0cf0-431c-b0fe-3db8f5f750ee 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z acfe2160-0cf0-431c-b0fe-3db8f5f750ee ']' 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.958 [2024-12-06 16:27:19.584315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:37.958 [2024-12-06 16:27:19.584410] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.958 [2024-12-06 16:27:19.584552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.958 [2024-12-06 16:27:19.584655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.958 [2024-12-06 16:27:19.584712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.958 16:27:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.958 [2024-12-06 16:27:19.744020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:37.958 [2024-12-06 16:27:19.746062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:11:37.958 [2024-12-06 16:27:19.746155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:37.958 [2024-12-06 16:27:19.746238] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:37.958 [2024-12-06 16:27:19.746345] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:37.958 [2024-12-06 16:27:19.746405] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:37.958 [2024-12-06 16:27:19.746482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:37.958 [2024-12-06 16:27:19.746530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:11:37.958 request: 00:11:37.958 { 00:11:37.958 "name": "raid_bdev1", 00:11:37.958 "raid_level": "concat", 00:11:37.958 "base_bdevs": [ 00:11:37.958 "malloc1", 00:11:37.958 "malloc2", 00:11:37.958 "malloc3" 00:11:37.958 ], 00:11:37.958 "strip_size_kb": 64, 00:11:37.958 "superblock": false, 00:11:37.958 "method": "bdev_raid_create", 00:11:37.958 "req_id": 1 00:11:37.958 } 00:11:37.958 Got JSON-RPC error response 00:11:37.958 response: 00:11:37.958 { 00:11:37.958 "code": -17, 00:11:37.958 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:37.958 } 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:37.958 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.238 [2024-12-06 16:27:19.811896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:38.238 [2024-12-06 16:27:19.812029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.238 [2024-12-06 16:27:19.812076] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:38.238 [2024-12-06 16:27:19.812112] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.238 [2024-12-06 16:27:19.814563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.238 [2024-12-06 16:27:19.814659] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:38.238 [2024-12-06 16:27:19.814780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:38.238 [2024-12-06 16:27:19.814880] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:38.238 pt1 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.238 "name": "raid_bdev1", 
00:11:38.238 "uuid": "acfe2160-0cf0-431c-b0fe-3db8f5f750ee", 00:11:38.238 "strip_size_kb": 64, 00:11:38.238 "state": "configuring", 00:11:38.238 "raid_level": "concat", 00:11:38.238 "superblock": true, 00:11:38.238 "num_base_bdevs": 3, 00:11:38.238 "num_base_bdevs_discovered": 1, 00:11:38.238 "num_base_bdevs_operational": 3, 00:11:38.238 "base_bdevs_list": [ 00:11:38.238 { 00:11:38.238 "name": "pt1", 00:11:38.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.238 "is_configured": true, 00:11:38.238 "data_offset": 2048, 00:11:38.238 "data_size": 63488 00:11:38.238 }, 00:11:38.238 { 00:11:38.238 "name": null, 00:11:38.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.238 "is_configured": false, 00:11:38.238 "data_offset": 2048, 00:11:38.238 "data_size": 63488 00:11:38.238 }, 00:11:38.238 { 00:11:38.238 "name": null, 00:11:38.238 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.238 "is_configured": false, 00:11:38.238 "data_offset": 2048, 00:11:38.238 "data_size": 63488 00:11:38.238 } 00:11:38.238 ] 00:11:38.238 }' 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.238 16:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.498 [2024-12-06 16:27:20.263216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:38.498 [2024-12-06 16:27:20.263336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.498 [2024-12-06 16:27:20.263377] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:38.498 [2024-12-06 16:27:20.263414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.498 [2024-12-06 16:27:20.263908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.498 [2024-12-06 16:27:20.263976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:38.498 [2024-12-06 16:27:20.264085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:38.498 [2024-12-06 16:27:20.264154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:38.498 pt2 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.498 [2024-12-06 16:27:20.275155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.498 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.757 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.757 "name": "raid_bdev1", 00:11:38.757 "uuid": "acfe2160-0cf0-431c-b0fe-3db8f5f750ee", 00:11:38.757 "strip_size_kb": 64, 00:11:38.757 "state": "configuring", 00:11:38.757 "raid_level": "concat", 00:11:38.757 "superblock": true, 00:11:38.757 "num_base_bdevs": 3, 00:11:38.757 "num_base_bdevs_discovered": 1, 00:11:38.757 "num_base_bdevs_operational": 3, 00:11:38.757 "base_bdevs_list": [ 00:11:38.757 { 00:11:38.757 "name": "pt1", 00:11:38.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.757 "is_configured": true, 00:11:38.757 "data_offset": 2048, 00:11:38.757 "data_size": 63488 00:11:38.757 }, 00:11:38.757 { 00:11:38.757 "name": null, 00:11:38.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.757 "is_configured": false, 00:11:38.757 "data_offset": 0, 00:11:38.757 "data_size": 63488 00:11:38.757 }, 00:11:38.757 { 00:11:38.757 "name": null, 00:11:38.757 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.757 "is_configured": false, 00:11:38.757 "data_offset": 2048, 00:11:38.757 "data_size": 63488 00:11:38.757 } 00:11:38.757 ] 00:11:38.757 }' 00:11:38.757 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.757 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.016 [2024-12-06 16:27:20.774363] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.016 [2024-12-06 16:27:20.774496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.016 [2024-12-06 16:27:20.774544] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:39.016 [2024-12-06 16:27:20.774576] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.016 [2024-12-06 16:27:20.775055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.016 [2024-12-06 16:27:20.775118] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.016 [2024-12-06 16:27:20.775240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:39.016 [2024-12-06 16:27:20.775297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:39.016 pt2 00:11:39.016 16:27:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.016 [2024-12-06 16:27:20.782319] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:39.016 [2024-12-06 16:27:20.782398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.016 [2024-12-06 16:27:20.782434] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:39.016 [2024-12-06 16:27:20.782461] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.016 [2024-12-06 16:27:20.782813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.016 [2024-12-06 16:27:20.782868] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:39.016 [2024-12-06 16:27:20.782955] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:39.016 [2024-12-06 16:27:20.783001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:39.016 [2024-12-06 16:27:20.783120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:39.016 [2024-12-06 16:27:20.783156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:39.016 [2024-12-06 16:27:20.783428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:11:39.016 [2024-12-06 16:27:20.783574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:39.016 [2024-12-06 16:27:20.783615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:39.016 [2024-12-06 16:27:20.783783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.016 pt3 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.016 16:27:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.016 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.016 "name": "raid_bdev1", 00:11:39.016 "uuid": "acfe2160-0cf0-431c-b0fe-3db8f5f750ee", 00:11:39.016 "strip_size_kb": 64, 00:11:39.016 "state": "online", 00:11:39.016 "raid_level": "concat", 00:11:39.016 "superblock": true, 00:11:39.016 "num_base_bdevs": 3, 00:11:39.016 "num_base_bdevs_discovered": 3, 00:11:39.016 "num_base_bdevs_operational": 3, 00:11:39.016 "base_bdevs_list": [ 00:11:39.016 { 00:11:39.016 "name": "pt1", 00:11:39.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.016 "is_configured": true, 00:11:39.016 "data_offset": 2048, 00:11:39.016 "data_size": 63488 00:11:39.016 }, 00:11:39.016 { 00:11:39.016 "name": "pt2", 00:11:39.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.016 "is_configured": true, 00:11:39.017 "data_offset": 2048, 00:11:39.017 "data_size": 63488 00:11:39.017 }, 00:11:39.017 { 00:11:39.017 "name": "pt3", 00:11:39.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.017 "is_configured": true, 00:11:39.017 "data_offset": 2048, 00:11:39.017 "data_size": 63488 00:11:39.017 } 00:11:39.017 ] 00:11:39.017 }' 00:11:39.017 16:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.017 16:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.583 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:39.583 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:11:39.583 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:39.583 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:39.583 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:39.583 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:39.583 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:39.583 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:39.583 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.583 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.583 [2024-12-06 16:27:21.261849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.583 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.584 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:39.584 "name": "raid_bdev1", 00:11:39.584 "aliases": [ 00:11:39.584 "acfe2160-0cf0-431c-b0fe-3db8f5f750ee" 00:11:39.584 ], 00:11:39.584 "product_name": "Raid Volume", 00:11:39.584 "block_size": 512, 00:11:39.584 "num_blocks": 190464, 00:11:39.584 "uuid": "acfe2160-0cf0-431c-b0fe-3db8f5f750ee", 00:11:39.584 "assigned_rate_limits": { 00:11:39.584 "rw_ios_per_sec": 0, 00:11:39.584 "rw_mbytes_per_sec": 0, 00:11:39.584 "r_mbytes_per_sec": 0, 00:11:39.584 "w_mbytes_per_sec": 0 00:11:39.584 }, 00:11:39.584 "claimed": false, 00:11:39.584 "zoned": false, 00:11:39.584 "supported_io_types": { 00:11:39.584 "read": true, 00:11:39.584 "write": true, 00:11:39.584 "unmap": true, 00:11:39.584 "flush": true, 00:11:39.584 "reset": true, 00:11:39.584 "nvme_admin": false, 00:11:39.584 "nvme_io": false, 
00:11:39.584 "nvme_io_md": false, 00:11:39.584 "write_zeroes": true, 00:11:39.584 "zcopy": false, 00:11:39.584 "get_zone_info": false, 00:11:39.584 "zone_management": false, 00:11:39.584 "zone_append": false, 00:11:39.584 "compare": false, 00:11:39.584 "compare_and_write": false, 00:11:39.584 "abort": false, 00:11:39.584 "seek_hole": false, 00:11:39.584 "seek_data": false, 00:11:39.584 "copy": false, 00:11:39.584 "nvme_iov_md": false 00:11:39.584 }, 00:11:39.584 "memory_domains": [ 00:11:39.584 { 00:11:39.584 "dma_device_id": "system", 00:11:39.584 "dma_device_type": 1 00:11:39.584 }, 00:11:39.584 { 00:11:39.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.584 "dma_device_type": 2 00:11:39.584 }, 00:11:39.584 { 00:11:39.584 "dma_device_id": "system", 00:11:39.584 "dma_device_type": 1 00:11:39.584 }, 00:11:39.584 { 00:11:39.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.584 "dma_device_type": 2 00:11:39.584 }, 00:11:39.584 { 00:11:39.584 "dma_device_id": "system", 00:11:39.584 "dma_device_type": 1 00:11:39.584 }, 00:11:39.584 { 00:11:39.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.584 "dma_device_type": 2 00:11:39.584 } 00:11:39.584 ], 00:11:39.584 "driver_specific": { 00:11:39.584 "raid": { 00:11:39.584 "uuid": "acfe2160-0cf0-431c-b0fe-3db8f5f750ee", 00:11:39.584 "strip_size_kb": 64, 00:11:39.584 "state": "online", 00:11:39.584 "raid_level": "concat", 00:11:39.584 "superblock": true, 00:11:39.584 "num_base_bdevs": 3, 00:11:39.584 "num_base_bdevs_discovered": 3, 00:11:39.584 "num_base_bdevs_operational": 3, 00:11:39.584 "base_bdevs_list": [ 00:11:39.584 { 00:11:39.584 "name": "pt1", 00:11:39.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.584 "is_configured": true, 00:11:39.584 "data_offset": 2048, 00:11:39.584 "data_size": 63488 00:11:39.584 }, 00:11:39.584 { 00:11:39.584 "name": "pt2", 00:11:39.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.584 "is_configured": true, 00:11:39.584 "data_offset": 2048, 00:11:39.584 
"data_size": 63488 00:11:39.584 }, 00:11:39.584 { 00:11:39.584 "name": "pt3", 00:11:39.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.584 "is_configured": true, 00:11:39.584 "data_offset": 2048, 00:11:39.584 "data_size": 63488 00:11:39.584 } 00:11:39.584 ] 00:11:39.584 } 00:11:39.584 } 00:11:39.584 }' 00:11:39.584 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.584 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:39.584 pt2 00:11:39.584 pt3' 00:11:39.584 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.584 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.584 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.584 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:39.584 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.584 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.584 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.584 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:39.843 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:39.844 [2024-12-06 16:27:21.545384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' acfe2160-0cf0-431c-b0fe-3db8f5f750ee '!=' acfe2160-0cf0-431c-b0fe-3db8f5f750ee ']' 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78334 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 78334 ']' 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 78334 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78334 00:11:39.844 killing process with pid 78334 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78334' 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 78334 00:11:39.844 [2024-12-06 16:27:21.630765] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:11:39.844 [2024-12-06 16:27:21.630856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.844 [2024-12-06 16:27:21.630921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.844 [2024-12-06 16:27:21.630931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:39.844 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 78334 00:11:39.844 [2024-12-06 16:27:21.665356] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.103 16:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:40.103 00:11:40.103 real 0m4.164s 00:11:40.103 user 0m6.601s 00:11:40.103 sys 0m0.913s 00:11:40.103 ************************************ 00:11:40.103 END TEST raid_superblock_test 00:11:40.103 ************************************ 00:11:40.103 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.103 16:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.362 16:27:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:40.362 16:27:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:40.362 16:27:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.362 16:27:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.362 ************************************ 00:11:40.362 START TEST raid_read_error_test 00:11:40.362 ************************************ 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:40.362 16:27:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lWMxRki8BU 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78576 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78576 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 78576 ']' 00:11:40.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.362 16:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.362 [2024-12-06 16:27:22.067225] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:11:40.362 [2024-12-06 16:27:22.067376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78576 ] 00:11:40.621 [2024-12-06 16:27:22.238618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.621 [2024-12-06 16:27:22.267993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.621 [2024-12-06 16:27:22.311111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.621 [2024-12-06 16:27:22.311164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.190 BaseBdev1_malloc 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.190 true 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.190 [2024-12-06 16:27:22.963131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:41.190 [2024-12-06 16:27:22.963335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.190 [2024-12-06 16:27:22.963389] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:41.190 [2024-12-06 16:27:22.963428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.190 [2024-12-06 16:27:22.965978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.190 [2024-12-06 16:27:22.966021] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:41.190 BaseBdev1 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.190 BaseBdev2_malloc 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.190 true 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.190 16:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:41.191 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.191 16:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.191 [2024-12-06 16:27:23.004389] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:41.191 [2024-12-06 16:27:23.004554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.191 [2024-12-06 16:27:23.004602] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:41.191 [2024-12-06 16:27:23.004664] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.191 [2024-12-06 16:27:23.007145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.191 [2024-12-06 16:27:23.007246] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:41.191 BaseBdev2 00:11:41.191 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.191 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.191 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:41.191 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.191 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.191 BaseBdev3_malloc 00:11:41.191 16:27:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.451 true 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.451 [2024-12-06 16:27:23.045580] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:41.451 [2024-12-06 16:27:23.045712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.451 [2024-12-06 16:27:23.045755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:41.451 [2024-12-06 16:27:23.045783] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.451 [2024-12-06 16:27:23.048269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.451 [2024-12-06 16:27:23.048356] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:41.451 BaseBdev3 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.451 [2024-12-06 16:27:23.057600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.451 [2024-12-06 16:27:23.059559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.451 [2024-12-06 16:27:23.059720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.451 [2024-12-06 16:27:23.059924] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:41.451 [2024-12-06 16:27:23.059947] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:41.451 [2024-12-06 16:27:23.060262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:41.451 [2024-12-06 16:27:23.060423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:41.451 [2024-12-06 16:27:23.060441] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:11:41.451 [2024-12-06 16:27:23.060609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.451 16:27:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.451 "name": "raid_bdev1", 00:11:41.451 "uuid": "af2f44a5-2462-457f-a00a-aee294c6197c", 00:11:41.451 "strip_size_kb": 64, 00:11:41.451 "state": "online", 00:11:41.451 "raid_level": "concat", 00:11:41.451 "superblock": true, 00:11:41.451 "num_base_bdevs": 3, 00:11:41.451 "num_base_bdevs_discovered": 3, 00:11:41.451 "num_base_bdevs_operational": 3, 00:11:41.451 "base_bdevs_list": [ 00:11:41.451 { 00:11:41.451 "name": "BaseBdev1", 00:11:41.451 "uuid": "17e21893-dc82-5905-84da-c22747290b69", 00:11:41.451 "is_configured": true, 00:11:41.451 "data_offset": 2048, 00:11:41.451 "data_size": 63488 00:11:41.451 }, 00:11:41.451 { 00:11:41.451 "name": "BaseBdev2", 00:11:41.451 "uuid": "4225f063-f45c-5c16-b04d-b00695b6fd3b", 00:11:41.451 "is_configured": true, 00:11:41.451 "data_offset": 2048, 00:11:41.451 "data_size": 63488 
00:11:41.451 }, 00:11:41.451 { 00:11:41.451 "name": "BaseBdev3", 00:11:41.451 "uuid": "4a702172-3252-5e9a-910d-25d8502213f7", 00:11:41.451 "is_configured": true, 00:11:41.451 "data_offset": 2048, 00:11:41.451 "data_size": 63488 00:11:41.451 } 00:11:41.451 ] 00:11:41.451 }' 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.451 16:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.711 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:41.711 16:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:41.970 [2024-12-06 16:27:23.629000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:42.909 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.910 "name": "raid_bdev1", 00:11:42.910 "uuid": "af2f44a5-2462-457f-a00a-aee294c6197c", 00:11:42.910 "strip_size_kb": 64, 00:11:42.910 "state": "online", 00:11:42.910 "raid_level": "concat", 00:11:42.910 "superblock": true, 00:11:42.910 "num_base_bdevs": 3, 00:11:42.910 "num_base_bdevs_discovered": 3, 00:11:42.910 "num_base_bdevs_operational": 3, 00:11:42.910 "base_bdevs_list": [ 00:11:42.910 { 00:11:42.910 "name": "BaseBdev1", 00:11:42.910 "uuid": "17e21893-dc82-5905-84da-c22747290b69", 00:11:42.910 "is_configured": true, 00:11:42.910 "data_offset": 2048, 00:11:42.910 "data_size": 63488 
00:11:42.910 }, 00:11:42.910 { 00:11:42.910 "name": "BaseBdev2", 00:11:42.910 "uuid": "4225f063-f45c-5c16-b04d-b00695b6fd3b", 00:11:42.910 "is_configured": true, 00:11:42.910 "data_offset": 2048, 00:11:42.910 "data_size": 63488 00:11:42.910 }, 00:11:42.910 { 00:11:42.910 "name": "BaseBdev3", 00:11:42.910 "uuid": "4a702172-3252-5e9a-910d-25d8502213f7", 00:11:42.910 "is_configured": true, 00:11:42.910 "data_offset": 2048, 00:11:42.910 "data_size": 63488 00:11:42.910 } 00:11:42.910 ] 00:11:42.910 }' 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.910 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.168 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.168 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.168 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.168 [2024-12-06 16:27:24.973602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.168 [2024-12-06 16:27:24.973733] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.168 [2024-12-06 16:27:24.976667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.168 [2024-12-06 16:27:24.976778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.168 [2024-12-06 16:27:24.976858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.169 [2024-12-06 16:27:24.976913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:11:43.169 { 00:11:43.169 "results": [ 00:11:43.169 { 00:11:43.169 "job": "raid_bdev1", 00:11:43.169 "core_mask": "0x1", 00:11:43.169 "workload": "randrw", 00:11:43.169 "percentage": 50, 
00:11:43.169 "status": "finished", 00:11:43.169 "queue_depth": 1, 00:11:43.169 "io_size": 131072, 00:11:43.169 "runtime": 1.34534, 00:11:43.169 "iops": 15199.131817979098, 00:11:43.169 "mibps": 1899.8914772473872, 00:11:43.169 "io_failed": 1, 00:11:43.169 "io_timeout": 0, 00:11:43.169 "avg_latency_us": 90.88951740841684, 00:11:43.169 "min_latency_us": 27.165065502183406, 00:11:43.169 "max_latency_us": 1581.1633187772925 00:11:43.169 } 00:11:43.169 ], 00:11:43.169 "core_count": 1 00:11:43.169 } 00:11:43.169 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.169 16:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78576 00:11:43.169 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 78576 ']' 00:11:43.169 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 78576 00:11:43.169 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:43.169 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.169 16:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78576 00:11:43.427 16:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.427 16:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.427 16:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78576' 00:11:43.427 killing process with pid 78576 00:11:43.428 16:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 78576 00:11:43.428 [2024-12-06 16:27:25.022969] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.428 16:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 78576 00:11:43.428 [2024-12-06 
16:27:25.049750] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.686 16:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:43.686 16:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lWMxRki8BU 00:11:43.686 16:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:43.686 ************************************ 00:11:43.686 END TEST raid_read_error_test 00:11:43.686 ************************************ 00:11:43.687 16:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:43.687 16:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:43.687 16:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.687 16:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:43.687 16:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:43.687 00:11:43.687 real 0m3.318s 00:11:43.687 user 0m4.239s 00:11:43.687 sys 0m0.543s 00:11:43.687 16:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.687 16:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.687 16:27:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:43.687 16:27:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:43.687 16:27:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.687 16:27:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.687 ************************************ 00:11:43.687 START TEST raid_write_error_test 00:11:43.687 ************************************ 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:11:43.687 16:27:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:43.687 16:27:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wjhYZHxWDb 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78705 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78705 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 78705 ']' 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.687 16:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.687 [2024-12-06 16:27:25.489915] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:11:43.687 [2024-12-06 16:27:25.490123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78705 ] 00:11:43.946 [2024-12-06 16:27:25.656085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.946 [2024-12-06 16:27:25.686550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.946 [2024-12-06 16:27:25.730025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.946 [2024-12-06 16:27:25.730064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.886 BaseBdev1_malloc 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.886 true 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.886 [2024-12-06 16:27:26.430715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:44.886 [2024-12-06 16:27:26.430874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.886 [2024-12-06 16:27:26.430952] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:44.886 [2024-12-06 16:27:26.430997] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.886 [2024-12-06 16:27:26.433532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.886 [2024-12-06 16:27:26.433617] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:44.886 BaseBdev1 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:44.886 BaseBdev2_malloc 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.886 true 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.886 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.886 [2024-12-06 16:27:26.471500] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:44.886 [2024-12-06 16:27:26.471644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.886 [2024-12-06 16:27:26.471699] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:44.886 [2024-12-06 16:27:26.471743] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.887 [2024-12-06 16:27:26.474068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.887 [2024-12-06 16:27:26.474153] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:44.887 BaseBdev2 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.887 16:27:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.887 BaseBdev3_malloc 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.887 true 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.887 [2024-12-06 16:27:26.512576] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:44.887 [2024-12-06 16:27:26.512709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.887 [2024-12-06 16:27:26.512753] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:44.887 [2024-12-06 16:27:26.512809] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.887 [2024-12-06 16:27:26.515105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.887 [2024-12-06 16:27:26.515187] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:44.887 BaseBdev3 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.887 [2024-12-06 16:27:26.524670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.887 [2024-12-06 16:27:26.526901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.887 [2024-12-06 16:27:26.527042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.887 [2024-12-06 16:27:26.527294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:44.887 [2024-12-06 16:27:26.527322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:44.887 [2024-12-06 16:27:26.527644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:44.887 [2024-12-06 16:27:26.527807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:44.887 [2024-12-06 16:27:26.527817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:11:44.887 [2024-12-06 16:27:26.528000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.887 "name": "raid_bdev1", 00:11:44.887 "uuid": "a56aadcf-ca3f-4096-9b43-b8b59ed8d673", 00:11:44.887 "strip_size_kb": 64, 00:11:44.887 "state": "online", 00:11:44.887 "raid_level": "concat", 00:11:44.887 "superblock": true, 00:11:44.887 "num_base_bdevs": 3, 00:11:44.887 "num_base_bdevs_discovered": 3, 00:11:44.887 "num_base_bdevs_operational": 3, 00:11:44.887 "base_bdevs_list": [ 00:11:44.887 { 00:11:44.887 
"name": "BaseBdev1", 00:11:44.887 "uuid": "b580945b-72c9-5dc9-acb2-38747dd2f9f5", 00:11:44.887 "is_configured": true, 00:11:44.887 "data_offset": 2048, 00:11:44.887 "data_size": 63488 00:11:44.887 }, 00:11:44.887 { 00:11:44.887 "name": "BaseBdev2", 00:11:44.887 "uuid": "d7364f46-b03d-5d70-9f4f-5f9312d37d24", 00:11:44.887 "is_configured": true, 00:11:44.887 "data_offset": 2048, 00:11:44.887 "data_size": 63488 00:11:44.887 }, 00:11:44.887 { 00:11:44.887 "name": "BaseBdev3", 00:11:44.887 "uuid": "7c5e76dd-89d0-5257-839f-98a209eb6e9c", 00:11:44.887 "is_configured": true, 00:11:44.887 "data_offset": 2048, 00:11:44.887 "data_size": 63488 00:11:44.887 } 00:11:44.887 ] 00:11:44.887 }' 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.887 16:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.474 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:45.474 16:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:45.474 [2024-12-06 16:27:27.096350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:46.413 16:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:46.413 16:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.413 16:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.413 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.414 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.414 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.414 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.414 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.414 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.414 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.414 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.414 "name": "raid_bdev1", 00:11:46.414 "uuid": "a56aadcf-ca3f-4096-9b43-b8b59ed8d673", 00:11:46.414 "strip_size_kb": 64, 00:11:46.414 "state": "online", 
00:11:46.414 "raid_level": "concat", 00:11:46.414 "superblock": true, 00:11:46.414 "num_base_bdevs": 3, 00:11:46.414 "num_base_bdevs_discovered": 3, 00:11:46.414 "num_base_bdevs_operational": 3, 00:11:46.414 "base_bdevs_list": [ 00:11:46.414 { 00:11:46.414 "name": "BaseBdev1", 00:11:46.414 "uuid": "b580945b-72c9-5dc9-acb2-38747dd2f9f5", 00:11:46.414 "is_configured": true, 00:11:46.414 "data_offset": 2048, 00:11:46.414 "data_size": 63488 00:11:46.414 }, 00:11:46.414 { 00:11:46.414 "name": "BaseBdev2", 00:11:46.414 "uuid": "d7364f46-b03d-5d70-9f4f-5f9312d37d24", 00:11:46.414 "is_configured": true, 00:11:46.414 "data_offset": 2048, 00:11:46.414 "data_size": 63488 00:11:46.414 }, 00:11:46.414 { 00:11:46.414 "name": "BaseBdev3", 00:11:46.414 "uuid": "7c5e76dd-89d0-5257-839f-98a209eb6e9c", 00:11:46.414 "is_configured": true, 00:11:46.414 "data_offset": 2048, 00:11:46.414 "data_size": 63488 00:11:46.414 } 00:11:46.414 ] 00:11:46.414 }' 00:11:46.414 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.414 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.673 [2024-12-06 16:27:28.453049] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.673 [2024-12-06 16:27:28.453194] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.673 [2024-12-06 16:27:28.456304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.673 [2024-12-06 16:27:28.456407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.673 [2024-12-06 16:27:28.456454] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.673 [2024-12-06 16:27:28.456467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:11:46.673 { 00:11:46.673 "results": [ 00:11:46.673 { 00:11:46.673 "job": "raid_bdev1", 00:11:46.673 "core_mask": "0x1", 00:11:46.673 "workload": "randrw", 00:11:46.673 "percentage": 50, 00:11:46.673 "status": "finished", 00:11:46.673 "queue_depth": 1, 00:11:46.673 "io_size": 131072, 00:11:46.673 "runtime": 1.357078, 00:11:46.673 "iops": 14417.004770543772, 00:11:46.673 "mibps": 1802.1255963179715, 00:11:46.673 "io_failed": 1, 00:11:46.673 "io_timeout": 0, 00:11:46.673 "avg_latency_us": 95.87478001898846, 00:11:46.673 "min_latency_us": 27.94759825327511, 00:11:46.673 "max_latency_us": 1488.1537117903931 00:11:46.673 } 00:11:46.673 ], 00:11:46.673 "core_count": 1 00:11:46.673 } 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78705 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 78705 ']' 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 78705 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78705 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.673 killing process with pid 78705 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.673 16:27:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78705' 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 78705 00:11:46.673 [2024-12-06 16:27:28.506969] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:46.673 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 78705 00:11:46.932 [2024-12-06 16:27:28.534337] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.932 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wjhYZHxWDb 00:11:46.932 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:46.932 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:46.932 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:46.932 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:46.932 ************************************ 00:11:46.932 END TEST raid_write_error_test 00:11:46.932 ************************************ 00:11:46.932 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:46.932 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:46.932 16:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:46.932 00:11:46.932 real 0m3.406s 00:11:46.932 user 0m4.381s 00:11:46.932 sys 0m0.574s 00:11:46.932 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.932 16:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.191 16:27:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:47.191 16:27:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:47.191 16:27:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:47.191 16:27:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.191 16:27:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.191 ************************************ 00:11:47.191 START TEST raid_state_function_test 00:11:47.191 ************************************ 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78838 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78838' 00:11:47.191 Process raid pid: 78838 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78838 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 78838 ']' 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.191 16:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.191 [2024-12-06 16:27:28.925697] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:11:47.191 [2024-12-06 16:27:28.925830] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.457 [2024-12-06 16:27:29.102662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.457 [2024-12-06 16:27:29.129968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.457 [2024-12-06 16:27:29.173148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.457 [2024-12-06 16:27:29.173193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.026 [2024-12-06 16:27:29.788587] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:48.026 [2024-12-06 16:27:29.788723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:48.026 [2024-12-06 16:27:29.788764] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.026 [2024-12-06 16:27:29.788792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.026 [2024-12-06 16:27:29.788814] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:48.026 [2024-12-06 16:27:29.788840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.026 
16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.026 "name": "Existed_Raid", 00:11:48.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.026 "strip_size_kb": 0, 00:11:48.026 "state": "configuring", 00:11:48.026 "raid_level": "raid1", 00:11:48.026 "superblock": false, 00:11:48.026 "num_base_bdevs": 3, 00:11:48.026 "num_base_bdevs_discovered": 0, 00:11:48.026 "num_base_bdevs_operational": 3, 00:11:48.026 "base_bdevs_list": [ 00:11:48.026 { 00:11:48.026 "name": "BaseBdev1", 00:11:48.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.026 "is_configured": false, 00:11:48.026 "data_offset": 0, 00:11:48.026 "data_size": 0 00:11:48.026 }, 00:11:48.026 { 00:11:48.026 "name": "BaseBdev2", 00:11:48.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.026 "is_configured": false, 00:11:48.026 "data_offset": 0, 00:11:48.026 "data_size": 0 00:11:48.026 }, 00:11:48.026 { 00:11:48.026 "name": "BaseBdev3", 00:11:48.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.026 "is_configured": false, 00:11:48.026 "data_offset": 0, 00:11:48.026 "data_size": 0 00:11:48.026 } 00:11:48.026 ] 00:11:48.026 }' 00:11:48.026 16:27:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.026 16:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.595 [2024-12-06 16:27:30.267779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:48.595 [2024-12-06 16:27:30.267871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.595 [2024-12-06 16:27:30.275780] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:48.595 [2024-12-06 16:27:30.275869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:48.595 [2024-12-06 16:27:30.275885] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.595 [2024-12-06 16:27:30.275896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.595 [2024-12-06 16:27:30.275903] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:48.595 [2024-12-06 16:27:30.275913] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.595 BaseBdev1 00:11:48.595 [2024-12-06 16:27:30.297290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.595 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.595 [ 00:11:48.595 { 00:11:48.595 "name": "BaseBdev1", 00:11:48.595 "aliases": [ 00:11:48.595 "7124c123-a110-463d-a48e-38e47f3ceabe" 00:11:48.595 ], 00:11:48.595 "product_name": "Malloc disk", 00:11:48.595 "block_size": 512, 00:11:48.595 "num_blocks": 65536, 00:11:48.595 "uuid": "7124c123-a110-463d-a48e-38e47f3ceabe", 00:11:48.595 "assigned_rate_limits": { 00:11:48.595 "rw_ios_per_sec": 0, 00:11:48.595 "rw_mbytes_per_sec": 0, 00:11:48.595 "r_mbytes_per_sec": 0, 00:11:48.595 "w_mbytes_per_sec": 0 00:11:48.595 }, 00:11:48.595 "claimed": true, 00:11:48.595 "claim_type": "exclusive_write", 00:11:48.595 "zoned": false, 00:11:48.596 "supported_io_types": { 00:11:48.596 "read": true, 00:11:48.596 "write": true, 00:11:48.596 "unmap": true, 00:11:48.596 "flush": true, 00:11:48.596 "reset": true, 00:11:48.596 "nvme_admin": false, 00:11:48.596 "nvme_io": false, 00:11:48.596 "nvme_io_md": false, 00:11:48.596 "write_zeroes": true, 00:11:48.596 "zcopy": true, 00:11:48.596 "get_zone_info": false, 00:11:48.596 "zone_management": false, 00:11:48.596 "zone_append": false, 00:11:48.596 "compare": false, 00:11:48.596 "compare_and_write": false, 00:11:48.596 "abort": true, 00:11:48.596 "seek_hole": false, 00:11:48.596 "seek_data": false, 00:11:48.596 "copy": true, 00:11:48.596 "nvme_iov_md": false 00:11:48.596 }, 00:11:48.596 "memory_domains": [ 00:11:48.596 { 00:11:48.596 "dma_device_id": "system", 00:11:48.596 "dma_device_type": 1 00:11:48.596 }, 00:11:48.596 { 00:11:48.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.596 "dma_device_type": 2 00:11:48.596 } 00:11:48.596 ], 00:11:48.596 "driver_specific": {} 00:11:48.596 } 00:11:48.596 ] 00:11:48.596 16:27:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:48.596 "name": "Existed_Raid", 00:11:48.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.596 "strip_size_kb": 0, 00:11:48.596 "state": "configuring", 00:11:48.596 "raid_level": "raid1", 00:11:48.596 "superblock": false, 00:11:48.596 "num_base_bdevs": 3, 00:11:48.596 "num_base_bdevs_discovered": 1, 00:11:48.596 "num_base_bdevs_operational": 3, 00:11:48.596 "base_bdevs_list": [ 00:11:48.596 { 00:11:48.596 "name": "BaseBdev1", 00:11:48.596 "uuid": "7124c123-a110-463d-a48e-38e47f3ceabe", 00:11:48.596 "is_configured": true, 00:11:48.596 "data_offset": 0, 00:11:48.596 "data_size": 65536 00:11:48.596 }, 00:11:48.596 { 00:11:48.596 "name": "BaseBdev2", 00:11:48.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.596 "is_configured": false, 00:11:48.596 "data_offset": 0, 00:11:48.596 "data_size": 0 00:11:48.596 }, 00:11:48.596 { 00:11:48.596 "name": "BaseBdev3", 00:11:48.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.596 "is_configured": false, 00:11:48.596 "data_offset": 0, 00:11:48.596 "data_size": 0 00:11:48.596 } 00:11:48.596 ] 00:11:48.596 }' 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.596 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.166 [2024-12-06 16:27:30.792540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.166 [2024-12-06 16:27:30.792713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.166 [2024-12-06 16:27:30.804540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.166 [2024-12-06 16:27:30.806725] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.166 [2024-12-06 16:27:30.806817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.166 [2024-12-06 16:27:30.806855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:49.166 [2024-12-06 16:27:30.806884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.166 "name": "Existed_Raid", 00:11:49.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.166 "strip_size_kb": 0, 00:11:49.166 "state": "configuring", 00:11:49.166 "raid_level": "raid1", 00:11:49.166 "superblock": false, 00:11:49.166 "num_base_bdevs": 3, 00:11:49.166 "num_base_bdevs_discovered": 1, 00:11:49.166 "num_base_bdevs_operational": 3, 00:11:49.166 "base_bdevs_list": [ 00:11:49.166 { 00:11:49.166 "name": "BaseBdev1", 00:11:49.166 "uuid": "7124c123-a110-463d-a48e-38e47f3ceabe", 00:11:49.166 "is_configured": true, 00:11:49.166 "data_offset": 0, 00:11:49.166 "data_size": 65536 00:11:49.166 }, 00:11:49.166 { 00:11:49.166 "name": "BaseBdev2", 00:11:49.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.166 
"is_configured": false, 00:11:49.166 "data_offset": 0, 00:11:49.166 "data_size": 0 00:11:49.166 }, 00:11:49.166 { 00:11:49.166 "name": "BaseBdev3", 00:11:49.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.166 "is_configured": false, 00:11:49.166 "data_offset": 0, 00:11:49.166 "data_size": 0 00:11:49.166 } 00:11:49.166 ] 00:11:49.166 }' 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.166 16:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.426 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:49.426 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.426 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.426 [2024-12-06 16:27:31.255138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.426 BaseBdev2 00:11:49.426 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.426 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:49.426 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:49.426 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.426 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:49.426 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.426 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.426 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.426 16:27:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.426 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.686 [ 00:11:49.686 { 00:11:49.686 "name": "BaseBdev2", 00:11:49.686 "aliases": [ 00:11:49.686 "619afd81-162a-4220-8a84-33dea991ff85" 00:11:49.686 ], 00:11:49.686 "product_name": "Malloc disk", 00:11:49.686 "block_size": 512, 00:11:49.686 "num_blocks": 65536, 00:11:49.686 "uuid": "619afd81-162a-4220-8a84-33dea991ff85", 00:11:49.686 "assigned_rate_limits": { 00:11:49.686 "rw_ios_per_sec": 0, 00:11:49.686 "rw_mbytes_per_sec": 0, 00:11:49.686 "r_mbytes_per_sec": 0, 00:11:49.686 "w_mbytes_per_sec": 0 00:11:49.686 }, 00:11:49.686 "claimed": true, 00:11:49.686 "claim_type": "exclusive_write", 00:11:49.686 "zoned": false, 00:11:49.686 "supported_io_types": { 00:11:49.686 "read": true, 00:11:49.686 "write": true, 00:11:49.686 "unmap": true, 00:11:49.686 "flush": true, 00:11:49.686 "reset": true, 00:11:49.686 "nvme_admin": false, 00:11:49.686 "nvme_io": false, 00:11:49.686 "nvme_io_md": false, 00:11:49.686 "write_zeroes": true, 00:11:49.686 "zcopy": true, 00:11:49.686 "get_zone_info": false, 00:11:49.686 "zone_management": false, 00:11:49.686 "zone_append": false, 00:11:49.686 "compare": false, 00:11:49.686 "compare_and_write": false, 00:11:49.686 "abort": true, 00:11:49.686 "seek_hole": false, 00:11:49.686 "seek_data": false, 00:11:49.686 "copy": true, 00:11:49.686 "nvme_iov_md": false 00:11:49.686 }, 00:11:49.686 
"memory_domains": [ 00:11:49.686 { 00:11:49.686 "dma_device_id": "system", 00:11:49.686 "dma_device_type": 1 00:11:49.686 }, 00:11:49.686 { 00:11:49.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.686 "dma_device_type": 2 00:11:49.686 } 00:11:49.686 ], 00:11:49.686 "driver_specific": {} 00:11:49.686 } 00:11:49.686 ] 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.686 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.686 "name": "Existed_Raid", 00:11:49.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.686 "strip_size_kb": 0, 00:11:49.686 "state": "configuring", 00:11:49.686 "raid_level": "raid1", 00:11:49.686 "superblock": false, 00:11:49.686 "num_base_bdevs": 3, 00:11:49.686 "num_base_bdevs_discovered": 2, 00:11:49.686 "num_base_bdevs_operational": 3, 00:11:49.686 "base_bdevs_list": [ 00:11:49.686 { 00:11:49.686 "name": "BaseBdev1", 00:11:49.686 "uuid": "7124c123-a110-463d-a48e-38e47f3ceabe", 00:11:49.686 "is_configured": true, 00:11:49.687 "data_offset": 0, 00:11:49.687 "data_size": 65536 00:11:49.687 }, 00:11:49.687 { 00:11:49.687 "name": "BaseBdev2", 00:11:49.687 "uuid": "619afd81-162a-4220-8a84-33dea991ff85", 00:11:49.687 "is_configured": true, 00:11:49.687 "data_offset": 0, 00:11:49.687 "data_size": 65536 00:11:49.687 }, 00:11:49.687 { 00:11:49.687 "name": "BaseBdev3", 00:11:49.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.687 "is_configured": false, 00:11:49.687 "data_offset": 0, 00:11:49.687 "data_size": 0 00:11:49.687 } 00:11:49.687 ] 00:11:49.687 }' 00:11:49.687 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.687 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.947 [2024-12-06 16:27:31.757296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.947 [2024-12-06 16:27:31.757445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:49.947 [2024-12-06 16:27:31.757463] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:49.947 [2024-12-06 16:27:31.757793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:49.947 [2024-12-06 16:27:31.757948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:49.947 [2024-12-06 16:27:31.757960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:11:49.947 [2024-12-06 16:27:31.758191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.947 BaseBdev3 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.947 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.208 [ 00:11:50.208 { 00:11:50.208 "name": "BaseBdev3", 00:11:50.208 "aliases": [ 00:11:50.208 "40278e89-6cb3-4596-b1a4-22b2f3193dac" 00:11:50.208 ], 00:11:50.208 "product_name": "Malloc disk", 00:11:50.208 "block_size": 512, 00:11:50.208 "num_blocks": 65536, 00:11:50.208 "uuid": "40278e89-6cb3-4596-b1a4-22b2f3193dac", 00:11:50.208 "assigned_rate_limits": { 00:11:50.208 "rw_ios_per_sec": 0, 00:11:50.208 "rw_mbytes_per_sec": 0, 00:11:50.208 "r_mbytes_per_sec": 0, 00:11:50.208 "w_mbytes_per_sec": 0 00:11:50.208 }, 00:11:50.208 "claimed": true, 00:11:50.208 "claim_type": "exclusive_write", 00:11:50.208 "zoned": false, 00:11:50.208 "supported_io_types": { 00:11:50.208 "read": true, 00:11:50.208 "write": true, 00:11:50.208 "unmap": true, 00:11:50.208 "flush": true, 00:11:50.208 "reset": true, 00:11:50.208 "nvme_admin": false, 00:11:50.208 "nvme_io": false, 00:11:50.208 "nvme_io_md": false, 00:11:50.208 "write_zeroes": true, 00:11:50.208 "zcopy": true, 00:11:50.208 "get_zone_info": false, 00:11:50.208 "zone_management": false, 00:11:50.208 "zone_append": false, 00:11:50.208 "compare": false, 00:11:50.208 "compare_and_write": false, 00:11:50.208 "abort": true, 00:11:50.208 "seek_hole": false, 00:11:50.208 "seek_data": false, 00:11:50.208 
"copy": true, 00:11:50.208 "nvme_iov_md": false 00:11:50.208 }, 00:11:50.208 "memory_domains": [ 00:11:50.208 { 00:11:50.208 "dma_device_id": "system", 00:11:50.208 "dma_device_type": 1 00:11:50.208 }, 00:11:50.208 { 00:11:50.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.208 "dma_device_type": 2 00:11:50.208 } 00:11:50.208 ], 00:11:50.208 "driver_specific": {} 00:11:50.208 } 00:11:50.208 ] 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.208 16:27:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.208 "name": "Existed_Raid", 00:11:50.208 "uuid": "a2e2c6c3-3cf0-4c47-afdb-bf42b8dce29a", 00:11:50.208 "strip_size_kb": 0, 00:11:50.208 "state": "online", 00:11:50.208 "raid_level": "raid1", 00:11:50.208 "superblock": false, 00:11:50.208 "num_base_bdevs": 3, 00:11:50.208 "num_base_bdevs_discovered": 3, 00:11:50.208 "num_base_bdevs_operational": 3, 00:11:50.208 "base_bdevs_list": [ 00:11:50.208 { 00:11:50.208 "name": "BaseBdev1", 00:11:50.208 "uuid": "7124c123-a110-463d-a48e-38e47f3ceabe", 00:11:50.208 "is_configured": true, 00:11:50.208 "data_offset": 0, 00:11:50.208 "data_size": 65536 00:11:50.208 }, 00:11:50.208 { 00:11:50.208 "name": "BaseBdev2", 00:11:50.208 "uuid": "619afd81-162a-4220-8a84-33dea991ff85", 00:11:50.208 "is_configured": true, 00:11:50.208 "data_offset": 0, 00:11:50.208 "data_size": 65536 00:11:50.208 }, 00:11:50.208 { 00:11:50.208 "name": "BaseBdev3", 00:11:50.208 "uuid": "40278e89-6cb3-4596-b1a4-22b2f3193dac", 00:11:50.208 "is_configured": true, 00:11:50.208 "data_offset": 0, 00:11:50.208 "data_size": 65536 00:11:50.208 } 00:11:50.208 ] 00:11:50.208 }' 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.208 16:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.469 16:27:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:50.469 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:50.469 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.469 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.469 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.469 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.469 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:50.469 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.469 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.469 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.469 [2024-12-06 16:27:32.284855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.469 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.728 "name": "Existed_Raid", 00:11:50.728 "aliases": [ 00:11:50.728 "a2e2c6c3-3cf0-4c47-afdb-bf42b8dce29a" 00:11:50.728 ], 00:11:50.728 "product_name": "Raid Volume", 00:11:50.728 "block_size": 512, 00:11:50.728 "num_blocks": 65536, 00:11:50.728 "uuid": "a2e2c6c3-3cf0-4c47-afdb-bf42b8dce29a", 00:11:50.728 "assigned_rate_limits": { 00:11:50.728 "rw_ios_per_sec": 0, 00:11:50.728 "rw_mbytes_per_sec": 0, 00:11:50.728 "r_mbytes_per_sec": 0, 00:11:50.728 "w_mbytes_per_sec": 0 00:11:50.728 }, 00:11:50.728 "claimed": false, 00:11:50.728 "zoned": false, 
00:11:50.728 "supported_io_types": { 00:11:50.728 "read": true, 00:11:50.728 "write": true, 00:11:50.728 "unmap": false, 00:11:50.728 "flush": false, 00:11:50.728 "reset": true, 00:11:50.728 "nvme_admin": false, 00:11:50.728 "nvme_io": false, 00:11:50.728 "nvme_io_md": false, 00:11:50.728 "write_zeroes": true, 00:11:50.728 "zcopy": false, 00:11:50.728 "get_zone_info": false, 00:11:50.728 "zone_management": false, 00:11:50.728 "zone_append": false, 00:11:50.728 "compare": false, 00:11:50.728 "compare_and_write": false, 00:11:50.728 "abort": false, 00:11:50.728 "seek_hole": false, 00:11:50.728 "seek_data": false, 00:11:50.728 "copy": false, 00:11:50.728 "nvme_iov_md": false 00:11:50.728 }, 00:11:50.728 "memory_domains": [ 00:11:50.728 { 00:11:50.728 "dma_device_id": "system", 00:11:50.728 "dma_device_type": 1 00:11:50.728 }, 00:11:50.728 { 00:11:50.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.728 "dma_device_type": 2 00:11:50.728 }, 00:11:50.728 { 00:11:50.728 "dma_device_id": "system", 00:11:50.728 "dma_device_type": 1 00:11:50.728 }, 00:11:50.728 { 00:11:50.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.728 "dma_device_type": 2 00:11:50.728 }, 00:11:50.728 { 00:11:50.728 "dma_device_id": "system", 00:11:50.728 "dma_device_type": 1 00:11:50.728 }, 00:11:50.728 { 00:11:50.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.728 "dma_device_type": 2 00:11:50.728 } 00:11:50.728 ], 00:11:50.728 "driver_specific": { 00:11:50.728 "raid": { 00:11:50.728 "uuid": "a2e2c6c3-3cf0-4c47-afdb-bf42b8dce29a", 00:11:50.728 "strip_size_kb": 0, 00:11:50.728 "state": "online", 00:11:50.728 "raid_level": "raid1", 00:11:50.728 "superblock": false, 00:11:50.728 "num_base_bdevs": 3, 00:11:50.728 "num_base_bdevs_discovered": 3, 00:11:50.728 "num_base_bdevs_operational": 3, 00:11:50.728 "base_bdevs_list": [ 00:11:50.728 { 00:11:50.728 "name": "BaseBdev1", 00:11:50.728 "uuid": "7124c123-a110-463d-a48e-38e47f3ceabe", 00:11:50.728 "is_configured": true, 00:11:50.728 
"data_offset": 0, 00:11:50.728 "data_size": 65536 00:11:50.728 }, 00:11:50.728 { 00:11:50.728 "name": "BaseBdev2", 00:11:50.728 "uuid": "619afd81-162a-4220-8a84-33dea991ff85", 00:11:50.728 "is_configured": true, 00:11:50.728 "data_offset": 0, 00:11:50.728 "data_size": 65536 00:11:50.728 }, 00:11:50.728 { 00:11:50.728 "name": "BaseBdev3", 00:11:50.728 "uuid": "40278e89-6cb3-4596-b1a4-22b2f3193dac", 00:11:50.728 "is_configured": true, 00:11:50.728 "data_offset": 0, 00:11:50.728 "data_size": 65536 00:11:50.728 } 00:11:50.728 ] 00:11:50.728 } 00:11:50.728 } 00:11:50.728 }' 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:50.728 BaseBdev2 00:11:50.728 BaseBdev3' 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.728 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.986 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.986 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:50.986 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:50.986 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.986 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.986 [2024-12-06 16:27:32.584111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:50.986 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.986 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:50.986 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:50.986 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.986 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.987 "name": "Existed_Raid", 00:11:50.987 "uuid": "a2e2c6c3-3cf0-4c47-afdb-bf42b8dce29a", 00:11:50.987 "strip_size_kb": 0, 00:11:50.987 "state": "online", 00:11:50.987 "raid_level": "raid1", 00:11:50.987 "superblock": false, 00:11:50.987 "num_base_bdevs": 3, 00:11:50.987 "num_base_bdevs_discovered": 2, 00:11:50.987 "num_base_bdevs_operational": 2, 00:11:50.987 "base_bdevs_list": [ 00:11:50.987 { 00:11:50.987 "name": null, 00:11:50.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.987 "is_configured": false, 00:11:50.987 "data_offset": 0, 00:11:50.987 "data_size": 65536 00:11:50.987 }, 00:11:50.987 { 00:11:50.987 "name": "BaseBdev2", 00:11:50.987 "uuid": "619afd81-162a-4220-8a84-33dea991ff85", 00:11:50.987 "is_configured": true, 00:11:50.987 "data_offset": 0, 00:11:50.987 "data_size": 65536 00:11:50.987 }, 00:11:50.987 { 00:11:50.987 "name": "BaseBdev3", 00:11:50.987 "uuid": "40278e89-6cb3-4596-b1a4-22b2f3193dac", 00:11:50.987 "is_configured": true, 00:11:50.987 "data_offset": 0, 00:11:50.987 "data_size": 65536 00:11:50.987 } 00:11:50.987 ] 
00:11:50.987 }' 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.987 16:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.245 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:51.245 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.245 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.245 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:51.245 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.245 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.245 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.505 [2024-12-06 16:27:33.106865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.505 16:27:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.505 [2024-12-06 16:27:33.178552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:51.505 [2024-12-06 16:27:33.178748] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.505 [2024-12-06 16:27:33.191132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.505 [2024-12-06 16:27:33.191190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.505 [2024-12-06 16:27:33.191224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:51.505 16:27:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.505 BaseBdev2 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:51.505 
16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.505 [ 00:11:51.505 { 00:11:51.505 "name": "BaseBdev2", 00:11:51.505 "aliases": [ 00:11:51.505 "7db0f83f-8709-4491-9f31-3ea8cc62e69a" 00:11:51.505 ], 00:11:51.505 "product_name": "Malloc disk", 00:11:51.505 "block_size": 512, 00:11:51.505 "num_blocks": 65536, 00:11:51.505 "uuid": "7db0f83f-8709-4491-9f31-3ea8cc62e69a", 00:11:51.505 "assigned_rate_limits": { 00:11:51.505 "rw_ios_per_sec": 0, 00:11:51.505 "rw_mbytes_per_sec": 0, 00:11:51.505 "r_mbytes_per_sec": 0, 00:11:51.505 "w_mbytes_per_sec": 0 00:11:51.505 }, 00:11:51.505 "claimed": false, 00:11:51.505 "zoned": false, 00:11:51.505 "supported_io_types": { 00:11:51.505 "read": true, 00:11:51.505 "write": true, 00:11:51.505 "unmap": true, 00:11:51.505 "flush": true, 00:11:51.505 "reset": true, 00:11:51.505 "nvme_admin": false, 00:11:51.505 "nvme_io": false, 00:11:51.505 "nvme_io_md": false, 00:11:51.505 "write_zeroes": true, 
00:11:51.505 "zcopy": true, 00:11:51.505 "get_zone_info": false, 00:11:51.505 "zone_management": false, 00:11:51.505 "zone_append": false, 00:11:51.505 "compare": false, 00:11:51.505 "compare_and_write": false, 00:11:51.505 "abort": true, 00:11:51.505 "seek_hole": false, 00:11:51.505 "seek_data": false, 00:11:51.505 "copy": true, 00:11:51.505 "nvme_iov_md": false 00:11:51.505 }, 00:11:51.505 "memory_domains": [ 00:11:51.505 { 00:11:51.505 "dma_device_id": "system", 00:11:51.505 "dma_device_type": 1 00:11:51.505 }, 00:11:51.505 { 00:11:51.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.505 "dma_device_type": 2 00:11:51.505 } 00:11:51.505 ], 00:11:51.505 "driver_specific": {} 00:11:51.505 } 00:11:51.505 ] 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.505 BaseBdev3 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:51.505 16:27:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.505 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.506 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:51.506 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.506 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.764 [ 00:11:51.764 { 00:11:51.764 "name": "BaseBdev3", 00:11:51.764 "aliases": [ 00:11:51.764 "389f9658-f319-4a4d-a433-015acd522a32" 00:11:51.764 ], 00:11:51.764 "product_name": "Malloc disk", 00:11:51.764 "block_size": 512, 00:11:51.764 "num_blocks": 65536, 00:11:51.764 "uuid": "389f9658-f319-4a4d-a433-015acd522a32", 00:11:51.764 "assigned_rate_limits": { 00:11:51.764 "rw_ios_per_sec": 0, 00:11:51.764 "rw_mbytes_per_sec": 0, 00:11:51.764 "r_mbytes_per_sec": 0, 00:11:51.764 "w_mbytes_per_sec": 0 00:11:51.764 }, 00:11:51.764 "claimed": false, 00:11:51.764 "zoned": false, 00:11:51.764 "supported_io_types": { 00:11:51.764 "read": true, 00:11:51.764 "write": true, 00:11:51.764 "unmap": true, 00:11:51.764 "flush": true, 00:11:51.764 "reset": true, 00:11:51.764 "nvme_admin": false, 00:11:51.764 "nvme_io": false, 00:11:51.764 "nvme_io_md": false, 00:11:51.764 "write_zeroes": true, 
00:11:51.764 "zcopy": true, 00:11:51.764 "get_zone_info": false, 00:11:51.764 "zone_management": false, 00:11:51.764 "zone_append": false, 00:11:51.764 "compare": false, 00:11:51.764 "compare_and_write": false, 00:11:51.764 "abort": true, 00:11:51.764 "seek_hole": false, 00:11:51.764 "seek_data": false, 00:11:51.764 "copy": true, 00:11:51.764 "nvme_iov_md": false 00:11:51.764 }, 00:11:51.764 "memory_domains": [ 00:11:51.764 { 00:11:51.764 "dma_device_id": "system", 00:11:51.764 "dma_device_type": 1 00:11:51.764 }, 00:11:51.765 { 00:11:51.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.765 "dma_device_type": 2 00:11:51.765 } 00:11:51.765 ], 00:11:51.765 "driver_specific": {} 00:11:51.765 } 00:11:51.765 ] 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.765 [2024-12-06 16:27:33.374471] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.765 [2024-12-06 16:27:33.374633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.765 [2024-12-06 16:27:33.374671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.765 [2024-12-06 16:27:33.376938] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:51.765 "name": "Existed_Raid", 00:11:51.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.765 "strip_size_kb": 0, 00:11:51.765 "state": "configuring", 00:11:51.765 "raid_level": "raid1", 00:11:51.765 "superblock": false, 00:11:51.765 "num_base_bdevs": 3, 00:11:51.765 "num_base_bdevs_discovered": 2, 00:11:51.765 "num_base_bdevs_operational": 3, 00:11:51.765 "base_bdevs_list": [ 00:11:51.765 { 00:11:51.765 "name": "BaseBdev1", 00:11:51.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.765 "is_configured": false, 00:11:51.765 "data_offset": 0, 00:11:51.765 "data_size": 0 00:11:51.765 }, 00:11:51.765 { 00:11:51.765 "name": "BaseBdev2", 00:11:51.765 "uuid": "7db0f83f-8709-4491-9f31-3ea8cc62e69a", 00:11:51.765 "is_configured": true, 00:11:51.765 "data_offset": 0, 00:11:51.765 "data_size": 65536 00:11:51.765 }, 00:11:51.765 { 00:11:51.765 "name": "BaseBdev3", 00:11:51.765 "uuid": "389f9658-f319-4a4d-a433-015acd522a32", 00:11:51.765 "is_configured": true, 00:11:51.765 "data_offset": 0, 00:11:51.765 "data_size": 65536 00:11:51.765 } 00:11:51.765 ] 00:11:51.765 }' 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.765 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 [2024-12-06 16:27:33.808672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.283 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.283 "name": "Existed_Raid", 00:11:52.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.283 "strip_size_kb": 0, 00:11:52.283 "state": "configuring", 00:11:52.283 "raid_level": "raid1", 00:11:52.283 "superblock": false, 00:11:52.283 "num_base_bdevs": 3, 
00:11:52.283 "num_base_bdevs_discovered": 1, 00:11:52.283 "num_base_bdevs_operational": 3, 00:11:52.283 "base_bdevs_list": [ 00:11:52.283 { 00:11:52.283 "name": "BaseBdev1", 00:11:52.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.283 "is_configured": false, 00:11:52.283 "data_offset": 0, 00:11:52.283 "data_size": 0 00:11:52.283 }, 00:11:52.283 { 00:11:52.283 "name": null, 00:11:52.283 "uuid": "7db0f83f-8709-4491-9f31-3ea8cc62e69a", 00:11:52.283 "is_configured": false, 00:11:52.283 "data_offset": 0, 00:11:52.283 "data_size": 65536 00:11:52.283 }, 00:11:52.283 { 00:11:52.283 "name": "BaseBdev3", 00:11:52.283 "uuid": "389f9658-f319-4a4d-a433-015acd522a32", 00:11:52.283 "is_configured": true, 00:11:52.283 "data_offset": 0, 00:11:52.283 "data_size": 65536 00:11:52.283 } 00:11:52.283 ] 00:11:52.283 }' 00:11:52.283 16:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.283 16:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.541 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.541 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.541 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:52.541 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.541 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.541 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:52.541 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:52.541 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.541 16:27:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.801 [2024-12-06 16:27:34.379023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.801 BaseBdev1 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.801 [ 00:11:52.801 { 00:11:52.801 "name": "BaseBdev1", 00:11:52.801 "aliases": [ 00:11:52.801 "58dc0c31-e6df-419e-9a30-b8cfd43ea32b" 00:11:52.801 ], 00:11:52.801 "product_name": "Malloc disk", 
00:11:52.801 "block_size": 512, 00:11:52.801 "num_blocks": 65536, 00:11:52.801 "uuid": "58dc0c31-e6df-419e-9a30-b8cfd43ea32b", 00:11:52.801 "assigned_rate_limits": { 00:11:52.801 "rw_ios_per_sec": 0, 00:11:52.801 "rw_mbytes_per_sec": 0, 00:11:52.801 "r_mbytes_per_sec": 0, 00:11:52.801 "w_mbytes_per_sec": 0 00:11:52.801 }, 00:11:52.801 "claimed": true, 00:11:52.801 "claim_type": "exclusive_write", 00:11:52.801 "zoned": false, 00:11:52.801 "supported_io_types": { 00:11:52.801 "read": true, 00:11:52.801 "write": true, 00:11:52.801 "unmap": true, 00:11:52.801 "flush": true, 00:11:52.801 "reset": true, 00:11:52.801 "nvme_admin": false, 00:11:52.801 "nvme_io": false, 00:11:52.801 "nvme_io_md": false, 00:11:52.801 "write_zeroes": true, 00:11:52.801 "zcopy": true, 00:11:52.801 "get_zone_info": false, 00:11:52.801 "zone_management": false, 00:11:52.801 "zone_append": false, 00:11:52.801 "compare": false, 00:11:52.801 "compare_and_write": false, 00:11:52.801 "abort": true, 00:11:52.801 "seek_hole": false, 00:11:52.801 "seek_data": false, 00:11:52.801 "copy": true, 00:11:52.801 "nvme_iov_md": false 00:11:52.801 }, 00:11:52.801 "memory_domains": [ 00:11:52.801 { 00:11:52.801 "dma_device_id": "system", 00:11:52.801 "dma_device_type": 1 00:11:52.801 }, 00:11:52.801 { 00:11:52.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.801 "dma_device_type": 2 00:11:52.801 } 00:11:52.801 ], 00:11:52.801 "driver_specific": {} 00:11:52.801 } 00:11:52.801 ] 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.801 "name": "Existed_Raid", 00:11:52.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.801 "strip_size_kb": 0, 00:11:52.801 "state": "configuring", 00:11:52.801 "raid_level": "raid1", 00:11:52.801 "superblock": false, 00:11:52.801 "num_base_bdevs": 3, 00:11:52.801 "num_base_bdevs_discovered": 2, 00:11:52.801 "num_base_bdevs_operational": 3, 00:11:52.801 "base_bdevs_list": [ 00:11:52.801 { 00:11:52.801 "name": "BaseBdev1", 00:11:52.801 "uuid": 
"58dc0c31-e6df-419e-9a30-b8cfd43ea32b", 00:11:52.801 "is_configured": true, 00:11:52.801 "data_offset": 0, 00:11:52.801 "data_size": 65536 00:11:52.801 }, 00:11:52.801 { 00:11:52.801 "name": null, 00:11:52.801 "uuid": "7db0f83f-8709-4491-9f31-3ea8cc62e69a", 00:11:52.801 "is_configured": false, 00:11:52.801 "data_offset": 0, 00:11:52.801 "data_size": 65536 00:11:52.801 }, 00:11:52.801 { 00:11:52.801 "name": "BaseBdev3", 00:11:52.801 "uuid": "389f9658-f319-4a4d-a433-015acd522a32", 00:11:52.801 "is_configured": true, 00:11:52.801 "data_offset": 0, 00:11:52.801 "data_size": 65536 00:11:52.801 } 00:11:52.801 ] 00:11:52.801 }' 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.801 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.061 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.061 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:53.061 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.061 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.061 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.320 [2024-12-06 16:27:34.910264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:53.320 16:27:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.320 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.320 "name": "Existed_Raid", 00:11:53.320 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:53.320 "strip_size_kb": 0, 00:11:53.320 "state": "configuring", 00:11:53.320 "raid_level": "raid1", 00:11:53.320 "superblock": false, 00:11:53.320 "num_base_bdevs": 3, 00:11:53.321 "num_base_bdevs_discovered": 1, 00:11:53.321 "num_base_bdevs_operational": 3, 00:11:53.321 "base_bdevs_list": [ 00:11:53.321 { 00:11:53.321 "name": "BaseBdev1", 00:11:53.321 "uuid": "58dc0c31-e6df-419e-9a30-b8cfd43ea32b", 00:11:53.321 "is_configured": true, 00:11:53.321 "data_offset": 0, 00:11:53.321 "data_size": 65536 00:11:53.321 }, 00:11:53.321 { 00:11:53.321 "name": null, 00:11:53.321 "uuid": "7db0f83f-8709-4491-9f31-3ea8cc62e69a", 00:11:53.321 "is_configured": false, 00:11:53.321 "data_offset": 0, 00:11:53.321 "data_size": 65536 00:11:53.321 }, 00:11:53.321 { 00:11:53.321 "name": null, 00:11:53.321 "uuid": "389f9658-f319-4a4d-a433-015acd522a32", 00:11:53.321 "is_configured": false, 00:11:53.321 "data_offset": 0, 00:11:53.321 "data_size": 65536 00:11:53.321 } 00:11:53.321 ] 00:11:53.321 }' 00:11:53.321 16:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.321 16:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.588 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.588 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.588 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.588 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:53.588 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.846 [2024-12-06 16:27:35.433390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.846 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.846 "name": "Existed_Raid", 00:11:53.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.846 "strip_size_kb": 0, 00:11:53.846 "state": "configuring", 00:11:53.846 "raid_level": "raid1", 00:11:53.846 "superblock": false, 00:11:53.846 "num_base_bdevs": 3, 00:11:53.846 "num_base_bdevs_discovered": 2, 00:11:53.846 "num_base_bdevs_operational": 3, 00:11:53.846 "base_bdevs_list": [ 00:11:53.846 { 00:11:53.846 "name": "BaseBdev1", 00:11:53.846 "uuid": "58dc0c31-e6df-419e-9a30-b8cfd43ea32b", 00:11:53.846 "is_configured": true, 00:11:53.846 "data_offset": 0, 00:11:53.846 "data_size": 65536 00:11:53.846 }, 00:11:53.846 { 00:11:53.846 "name": null, 00:11:53.847 "uuid": "7db0f83f-8709-4491-9f31-3ea8cc62e69a", 00:11:53.847 "is_configured": false, 00:11:53.847 "data_offset": 0, 00:11:53.847 "data_size": 65536 00:11:53.847 }, 00:11:53.847 { 00:11:53.847 "name": "BaseBdev3", 00:11:53.847 "uuid": "389f9658-f319-4a4d-a433-015acd522a32", 00:11:53.847 "is_configured": true, 00:11:53.847 "data_offset": 0, 00:11:53.847 "data_size": 65536 00:11:53.847 } 00:11:53.847 ] 00:11:53.847 }' 00:11:53.847 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.847 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.113 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.113 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.113 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:54.113 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.113 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.382 [2024-12-06 16:27:35.968554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.382 16:27:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.382 16:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.382 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.382 "name": "Existed_Raid", 00:11:54.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.382 "strip_size_kb": 0, 00:11:54.382 "state": "configuring", 00:11:54.382 "raid_level": "raid1", 00:11:54.382 "superblock": false, 00:11:54.382 "num_base_bdevs": 3, 00:11:54.382 "num_base_bdevs_discovered": 1, 00:11:54.382 "num_base_bdevs_operational": 3, 00:11:54.382 "base_bdevs_list": [ 00:11:54.382 { 00:11:54.382 "name": null, 00:11:54.382 "uuid": "58dc0c31-e6df-419e-9a30-b8cfd43ea32b", 00:11:54.382 "is_configured": false, 00:11:54.382 "data_offset": 0, 00:11:54.383 "data_size": 65536 00:11:54.383 }, 00:11:54.383 { 00:11:54.383 "name": null, 00:11:54.383 "uuid": "7db0f83f-8709-4491-9f31-3ea8cc62e69a", 00:11:54.383 "is_configured": false, 00:11:54.383 "data_offset": 0, 00:11:54.383 "data_size": 65536 00:11:54.383 }, 00:11:54.383 { 00:11:54.383 "name": "BaseBdev3", 00:11:54.383 "uuid": "389f9658-f319-4a4d-a433-015acd522a32", 00:11:54.383 "is_configured": true, 00:11:54.383 "data_offset": 0, 00:11:54.383 "data_size": 65536 00:11:54.383 } 00:11:54.383 ] 00:11:54.383 }' 00:11:54.383 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.383 16:27:36 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:54.641 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:54.641 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.641 16:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.641 16:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.900 [2024-12-06 16:27:36.514866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.900 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.901 16:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.901 16:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.901 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.901 16:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.901 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.901 "name": "Existed_Raid", 00:11:54.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.901 "strip_size_kb": 0, 00:11:54.901 "state": "configuring", 00:11:54.901 "raid_level": "raid1", 00:11:54.901 "superblock": false, 00:11:54.901 "num_base_bdevs": 3, 00:11:54.901 "num_base_bdevs_discovered": 2, 00:11:54.901 "num_base_bdevs_operational": 3, 00:11:54.901 "base_bdevs_list": [ 00:11:54.901 { 00:11:54.901 "name": null, 00:11:54.901 "uuid": "58dc0c31-e6df-419e-9a30-b8cfd43ea32b", 00:11:54.901 "is_configured": false, 00:11:54.901 "data_offset": 0, 00:11:54.901 "data_size": 65536 00:11:54.901 }, 00:11:54.901 { 00:11:54.901 "name": "BaseBdev2", 00:11:54.901 "uuid": "7db0f83f-8709-4491-9f31-3ea8cc62e69a", 00:11:54.901 "is_configured": true, 00:11:54.901 "data_offset": 0, 00:11:54.901 "data_size": 65536 00:11:54.901 }, 00:11:54.901 { 
00:11:54.901 "name": "BaseBdev3", 00:11:54.901 "uuid": "389f9658-f319-4a4d-a433-015acd522a32", 00:11:54.901 "is_configured": true, 00:11:54.901 "data_offset": 0, 00:11:54.901 "data_size": 65536 00:11:54.901 } 00:11:54.901 ] 00:11:54.901 }' 00:11:54.901 16:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.901 16:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 58dc0c31-e6df-419e-9a30-b8cfd43ea32b 00:11:55.468 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.468 16:27:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.468 [2024-12-06 16:27:37.129486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:55.468 [2024-12-06 16:27:37.129646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:55.468 [2024-12-06 16:27:37.129662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:55.469 [2024-12-06 16:27:37.129978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:55.469 [2024-12-06 16:27:37.130118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:55.469 [2024-12-06 16:27:37.130134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:55.469 [2024-12-06 16:27:37.130358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.469 NewBaseBdev 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.469 [ 00:11:55.469 { 00:11:55.469 "name": "NewBaseBdev", 00:11:55.469 "aliases": [ 00:11:55.469 "58dc0c31-e6df-419e-9a30-b8cfd43ea32b" 00:11:55.469 ], 00:11:55.469 "product_name": "Malloc disk", 00:11:55.469 "block_size": 512, 00:11:55.469 "num_blocks": 65536, 00:11:55.469 "uuid": "58dc0c31-e6df-419e-9a30-b8cfd43ea32b", 00:11:55.469 "assigned_rate_limits": { 00:11:55.469 "rw_ios_per_sec": 0, 00:11:55.469 "rw_mbytes_per_sec": 0, 00:11:55.469 "r_mbytes_per_sec": 0, 00:11:55.469 "w_mbytes_per_sec": 0 00:11:55.469 }, 00:11:55.469 "claimed": true, 00:11:55.469 "claim_type": "exclusive_write", 00:11:55.469 "zoned": false, 00:11:55.469 "supported_io_types": { 00:11:55.469 "read": true, 00:11:55.469 "write": true, 00:11:55.469 "unmap": true, 00:11:55.469 "flush": true, 00:11:55.469 "reset": true, 00:11:55.469 "nvme_admin": false, 00:11:55.469 "nvme_io": false, 00:11:55.469 "nvme_io_md": false, 00:11:55.469 "write_zeroes": true, 00:11:55.469 "zcopy": true, 00:11:55.469 "get_zone_info": false, 00:11:55.469 "zone_management": false, 00:11:55.469 "zone_append": false, 00:11:55.469 "compare": false, 00:11:55.469 "compare_and_write": false, 00:11:55.469 "abort": true, 00:11:55.469 "seek_hole": false, 00:11:55.469 "seek_data": false, 00:11:55.469 "copy": true, 00:11:55.469 "nvme_iov_md": false 00:11:55.469 }, 00:11:55.469 "memory_domains": [ 00:11:55.469 { 00:11:55.469 
"dma_device_id": "system", 00:11:55.469 "dma_device_type": 1 00:11:55.469 }, 00:11:55.469 { 00:11:55.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.469 "dma_device_type": 2 00:11:55.469 } 00:11:55.469 ], 00:11:55.469 "driver_specific": {} 00:11:55.469 } 00:11:55.469 ] 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.469 "name": "Existed_Raid", 00:11:55.469 "uuid": "4ec4be6f-e080-442f-8418-4f61dc95e52c", 00:11:55.469 "strip_size_kb": 0, 00:11:55.469 "state": "online", 00:11:55.469 "raid_level": "raid1", 00:11:55.469 "superblock": false, 00:11:55.469 "num_base_bdevs": 3, 00:11:55.469 "num_base_bdevs_discovered": 3, 00:11:55.469 "num_base_bdevs_operational": 3, 00:11:55.469 "base_bdevs_list": [ 00:11:55.469 { 00:11:55.469 "name": "NewBaseBdev", 00:11:55.469 "uuid": "58dc0c31-e6df-419e-9a30-b8cfd43ea32b", 00:11:55.469 "is_configured": true, 00:11:55.469 "data_offset": 0, 00:11:55.469 "data_size": 65536 00:11:55.469 }, 00:11:55.469 { 00:11:55.469 "name": "BaseBdev2", 00:11:55.469 "uuid": "7db0f83f-8709-4491-9f31-3ea8cc62e69a", 00:11:55.469 "is_configured": true, 00:11:55.469 "data_offset": 0, 00:11:55.469 "data_size": 65536 00:11:55.469 }, 00:11:55.469 { 00:11:55.469 "name": "BaseBdev3", 00:11:55.469 "uuid": "389f9658-f319-4a4d-a433-015acd522a32", 00:11:55.469 "is_configured": true, 00:11:55.469 "data_offset": 0, 00:11:55.469 "data_size": 65536 00:11:55.469 } 00:11:55.469 ] 00:11:55.469 }' 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.469 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.036 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:56.036 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:56.036 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:56.036 16:27:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.036 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.036 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.036 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.036 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:56.036 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.036 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.036 [2024-12-06 16:27:37.617113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.036 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.036 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:56.036 "name": "Existed_Raid", 00:11:56.036 "aliases": [ 00:11:56.036 "4ec4be6f-e080-442f-8418-4f61dc95e52c" 00:11:56.036 ], 00:11:56.036 "product_name": "Raid Volume", 00:11:56.036 "block_size": 512, 00:11:56.036 "num_blocks": 65536, 00:11:56.036 "uuid": "4ec4be6f-e080-442f-8418-4f61dc95e52c", 00:11:56.036 "assigned_rate_limits": { 00:11:56.036 "rw_ios_per_sec": 0, 00:11:56.036 "rw_mbytes_per_sec": 0, 00:11:56.036 "r_mbytes_per_sec": 0, 00:11:56.036 "w_mbytes_per_sec": 0 00:11:56.036 }, 00:11:56.036 "claimed": false, 00:11:56.036 "zoned": false, 00:11:56.036 "supported_io_types": { 00:11:56.036 "read": true, 00:11:56.036 "write": true, 00:11:56.036 "unmap": false, 00:11:56.036 "flush": false, 00:11:56.036 "reset": true, 00:11:56.036 "nvme_admin": false, 00:11:56.036 "nvme_io": false, 00:11:56.036 "nvme_io_md": false, 00:11:56.036 "write_zeroes": true, 00:11:56.036 "zcopy": false, 00:11:56.036 
"get_zone_info": false, 00:11:56.036 "zone_management": false, 00:11:56.036 "zone_append": false, 00:11:56.036 "compare": false, 00:11:56.036 "compare_and_write": false, 00:11:56.036 "abort": false, 00:11:56.036 "seek_hole": false, 00:11:56.036 "seek_data": false, 00:11:56.036 "copy": false, 00:11:56.036 "nvme_iov_md": false 00:11:56.036 }, 00:11:56.036 "memory_domains": [ 00:11:56.036 { 00:11:56.036 "dma_device_id": "system", 00:11:56.036 "dma_device_type": 1 00:11:56.036 }, 00:11:56.036 { 00:11:56.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.036 "dma_device_type": 2 00:11:56.036 }, 00:11:56.036 { 00:11:56.036 "dma_device_id": "system", 00:11:56.036 "dma_device_type": 1 00:11:56.036 }, 00:11:56.036 { 00:11:56.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.036 "dma_device_type": 2 00:11:56.036 }, 00:11:56.036 { 00:11:56.036 "dma_device_id": "system", 00:11:56.036 "dma_device_type": 1 00:11:56.036 }, 00:11:56.036 { 00:11:56.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.036 "dma_device_type": 2 00:11:56.036 } 00:11:56.036 ], 00:11:56.036 "driver_specific": { 00:11:56.036 "raid": { 00:11:56.036 "uuid": "4ec4be6f-e080-442f-8418-4f61dc95e52c", 00:11:56.036 "strip_size_kb": 0, 00:11:56.036 "state": "online", 00:11:56.036 "raid_level": "raid1", 00:11:56.036 "superblock": false, 00:11:56.036 "num_base_bdevs": 3, 00:11:56.036 "num_base_bdevs_discovered": 3, 00:11:56.036 "num_base_bdevs_operational": 3, 00:11:56.037 "base_bdevs_list": [ 00:11:56.037 { 00:11:56.037 "name": "NewBaseBdev", 00:11:56.037 "uuid": "58dc0c31-e6df-419e-9a30-b8cfd43ea32b", 00:11:56.037 "is_configured": true, 00:11:56.037 "data_offset": 0, 00:11:56.037 "data_size": 65536 00:11:56.037 }, 00:11:56.037 { 00:11:56.037 "name": "BaseBdev2", 00:11:56.037 "uuid": "7db0f83f-8709-4491-9f31-3ea8cc62e69a", 00:11:56.037 "is_configured": true, 00:11:56.037 "data_offset": 0, 00:11:56.037 "data_size": 65536 00:11:56.037 }, 00:11:56.037 { 00:11:56.037 "name": "BaseBdev3", 00:11:56.037 "uuid": 
"389f9658-f319-4a4d-a433-015acd522a32", 00:11:56.037 "is_configured": true, 00:11:56.037 "data_offset": 0, 00:11:56.037 "data_size": 65536 00:11:56.037 } 00:11:56.037 ] 00:11:56.037 } 00:11:56.037 } 00:11:56.037 }' 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:56.037 BaseBdev2 00:11:56.037 BaseBdev3' 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.037 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.296 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.296 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.296 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.296 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.296 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:56.296 [2024-12-06 16:27:37.896290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.296 [2024-12-06 16:27:37.896330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:56.296 [2024-12-06 16:27:37.896430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:56.296 [2024-12-06 16:27:37.896735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:56.296 [2024-12-06 16:27:37.896749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:11:56.296 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.296 16:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78838 00:11:56.296 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 78838 ']' 00:11:56.296 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 78838 00:11:56.296 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:56.296 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.296 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78838 00:11:56.296 killing process with pid 78838 00:11:56.297 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.297 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.297 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78838' 00:11:56.297 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 78838 00:11:56.297 
[2024-12-06 16:27:37.932684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:56.297 16:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 78838 00:11:56.297 [2024-12-06 16:27:37.966238] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:56.557 00:11:56.557 real 0m9.364s 00:11:56.557 user 0m15.973s 00:11:56.557 sys 0m1.949s 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.557 ************************************ 00:11:56.557 END TEST raid_state_function_test 00:11:56.557 ************************************ 00:11:56.557 16:27:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:56.557 16:27:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:56.557 16:27:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.557 16:27:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.557 ************************************ 00:11:56.557 START TEST raid_state_function_test_sb 00:11:56.557 ************************************ 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:56.557 16:27:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:56.557 
16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:56.557 Process raid pid: 79448 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79448 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79448' 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79448 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 79448 ']' 00:11:56.557 16:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.558 16:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.558 16:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.558 16:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.558 16:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.558 [2024-12-06 16:27:38.367314] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:11:56.558 [2024-12-06 16:27:38.367551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.818 [2024-12-06 16:27:38.530146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.818 [2024-12-06 16:27:38.563303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.818 [2024-12-06 16:27:38.608480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.818 [2024-12-06 16:27:38.608606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.755 [2024-12-06 16:27:39.264579] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:57.755 [2024-12-06 16:27:39.264727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:57.755 [2024-12-06 16:27:39.264768] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:57.755 [2024-12-06 16:27:39.264799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:57.755 [2024-12-06 16:27:39.264822] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:57.755 [2024-12-06 16:27:39.264851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.755 "name": "Existed_Raid", 00:11:57.755 "uuid": "b8712548-d50c-4c74-a5af-e9d8c8d9fb8b", 00:11:57.755 "strip_size_kb": 0, 00:11:57.755 "state": "configuring", 00:11:57.755 "raid_level": "raid1", 00:11:57.755 "superblock": true, 00:11:57.755 "num_base_bdevs": 3, 00:11:57.755 "num_base_bdevs_discovered": 0, 00:11:57.755 "num_base_bdevs_operational": 3, 00:11:57.755 "base_bdevs_list": [ 00:11:57.755 { 00:11:57.755 "name": "BaseBdev1", 00:11:57.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.755 "is_configured": false, 00:11:57.755 "data_offset": 0, 00:11:57.755 "data_size": 0 00:11:57.755 }, 00:11:57.755 { 00:11:57.755 "name": "BaseBdev2", 00:11:57.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.755 "is_configured": false, 00:11:57.755 "data_offset": 0, 00:11:57.755 "data_size": 0 00:11:57.755 }, 00:11:57.755 { 00:11:57.755 "name": "BaseBdev3", 00:11:57.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.755 "is_configured": false, 00:11:57.755 "data_offset": 0, 00:11:57.755 "data_size": 0 00:11:57.755 } 00:11:57.755 ] 00:11:57.755 }' 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.755 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.014 [2024-12-06 16:27:39.723727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:58.014 [2024-12-06 16:27:39.723779] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.014 [2024-12-06 16:27:39.735710] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:58.014 [2024-12-06 16:27:39.735761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:58.014 [2024-12-06 16:27:39.735772] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:58.014 [2024-12-06 16:27:39.735783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:58.014 [2024-12-06 16:27:39.735791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:58.014 [2024-12-06 16:27:39.735801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.014 [2024-12-06 16:27:39.757326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.014 BaseBdev1 
00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:58.014 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.015 [ 00:11:58.015 { 00:11:58.015 "name": "BaseBdev1", 00:11:58.015 "aliases": [ 00:11:58.015 "1e82cd8e-3adc-4e13-9a7c-3793df5b2cb1" 00:11:58.015 ], 00:11:58.015 "product_name": "Malloc disk", 00:11:58.015 "block_size": 512, 00:11:58.015 "num_blocks": 65536, 00:11:58.015 "uuid": "1e82cd8e-3adc-4e13-9a7c-3793df5b2cb1", 00:11:58.015 "assigned_rate_limits": { 00:11:58.015 
"rw_ios_per_sec": 0, 00:11:58.015 "rw_mbytes_per_sec": 0, 00:11:58.015 "r_mbytes_per_sec": 0, 00:11:58.015 "w_mbytes_per_sec": 0 00:11:58.015 }, 00:11:58.015 "claimed": true, 00:11:58.015 "claim_type": "exclusive_write", 00:11:58.015 "zoned": false, 00:11:58.015 "supported_io_types": { 00:11:58.015 "read": true, 00:11:58.015 "write": true, 00:11:58.015 "unmap": true, 00:11:58.015 "flush": true, 00:11:58.015 "reset": true, 00:11:58.015 "nvme_admin": false, 00:11:58.015 "nvme_io": false, 00:11:58.015 "nvme_io_md": false, 00:11:58.015 "write_zeroes": true, 00:11:58.015 "zcopy": true, 00:11:58.015 "get_zone_info": false, 00:11:58.015 "zone_management": false, 00:11:58.015 "zone_append": false, 00:11:58.015 "compare": false, 00:11:58.015 "compare_and_write": false, 00:11:58.015 "abort": true, 00:11:58.015 "seek_hole": false, 00:11:58.015 "seek_data": false, 00:11:58.015 "copy": true, 00:11:58.015 "nvme_iov_md": false 00:11:58.015 }, 00:11:58.015 "memory_domains": [ 00:11:58.015 { 00:11:58.015 "dma_device_id": "system", 00:11:58.015 "dma_device_type": 1 00:11:58.015 }, 00:11:58.015 { 00:11:58.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.015 "dma_device_type": 2 00:11:58.015 } 00:11:58.015 ], 00:11:58.015 "driver_specific": {} 00:11:58.015 } 00:11:58.015 ] 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.015 "name": "Existed_Raid", 00:11:58.015 "uuid": "5668f8f8-7877-45e1-832d-6cb73a6110ec", 00:11:58.015 "strip_size_kb": 0, 00:11:58.015 "state": "configuring", 00:11:58.015 "raid_level": "raid1", 00:11:58.015 "superblock": true, 00:11:58.015 "num_base_bdevs": 3, 00:11:58.015 "num_base_bdevs_discovered": 1, 00:11:58.015 "num_base_bdevs_operational": 3, 00:11:58.015 "base_bdevs_list": [ 00:11:58.015 { 00:11:58.015 "name": "BaseBdev1", 00:11:58.015 "uuid": "1e82cd8e-3adc-4e13-9a7c-3793df5b2cb1", 00:11:58.015 "is_configured": true, 00:11:58.015 "data_offset": 2048, 00:11:58.015 "data_size": 63488 
00:11:58.015 }, 00:11:58.015 { 00:11:58.015 "name": "BaseBdev2", 00:11:58.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.015 "is_configured": false, 00:11:58.015 "data_offset": 0, 00:11:58.015 "data_size": 0 00:11:58.015 }, 00:11:58.015 { 00:11:58.015 "name": "BaseBdev3", 00:11:58.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.015 "is_configured": false, 00:11:58.015 "data_offset": 0, 00:11:58.015 "data_size": 0 00:11:58.015 } 00:11:58.015 ] 00:11:58.015 }' 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.015 16:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.634 [2024-12-06 16:27:40.256551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:58.634 [2024-12-06 16:27:40.256688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.634 [2024-12-06 16:27:40.268597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.634 [2024-12-06 16:27:40.270878] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:58.634 [2024-12-06 16:27:40.270935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:58.634 [2024-12-06 16:27:40.270946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:58.634 [2024-12-06 16:27:40.270958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.634 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.635 "name": "Existed_Raid", 00:11:58.635 "uuid": "ec6db274-1eab-4bab-8903-e0febc8480b4", 00:11:58.635 "strip_size_kb": 0, 00:11:58.635 "state": "configuring", 00:11:58.635 "raid_level": "raid1", 00:11:58.635 "superblock": true, 00:11:58.635 "num_base_bdevs": 3, 00:11:58.635 "num_base_bdevs_discovered": 1, 00:11:58.635 "num_base_bdevs_operational": 3, 00:11:58.635 "base_bdevs_list": [ 00:11:58.635 { 00:11:58.635 "name": "BaseBdev1", 00:11:58.635 "uuid": "1e82cd8e-3adc-4e13-9a7c-3793df5b2cb1", 00:11:58.635 "is_configured": true, 00:11:58.635 "data_offset": 2048, 00:11:58.635 "data_size": 63488 00:11:58.635 }, 00:11:58.635 { 00:11:58.635 "name": "BaseBdev2", 00:11:58.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.635 "is_configured": false, 00:11:58.635 "data_offset": 0, 00:11:58.635 "data_size": 0 00:11:58.635 }, 00:11:58.635 { 00:11:58.635 "name": "BaseBdev3", 00:11:58.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.635 "is_configured": false, 00:11:58.635 "data_offset": 0, 00:11:58.635 "data_size": 0 00:11:58.635 } 00:11:58.635 ] 00:11:58.635 }' 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.635 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.216 [2024-12-06 16:27:40.779212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.216 BaseBdev2 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.216 [ 00:11:59.216 { 00:11:59.216 "name": "BaseBdev2", 00:11:59.216 "aliases": [ 00:11:59.216 "62d248b6-90c8-439b-8ac1-f05fecd4e4ee" 00:11:59.216 ], 00:11:59.216 "product_name": "Malloc disk", 00:11:59.216 "block_size": 512, 00:11:59.216 "num_blocks": 65536, 00:11:59.216 "uuid": "62d248b6-90c8-439b-8ac1-f05fecd4e4ee", 00:11:59.216 "assigned_rate_limits": { 00:11:59.216 "rw_ios_per_sec": 0, 00:11:59.216 "rw_mbytes_per_sec": 0, 00:11:59.216 "r_mbytes_per_sec": 0, 00:11:59.216 "w_mbytes_per_sec": 0 00:11:59.216 }, 00:11:59.216 "claimed": true, 00:11:59.216 "claim_type": "exclusive_write", 00:11:59.216 "zoned": false, 00:11:59.216 "supported_io_types": { 00:11:59.216 "read": true, 00:11:59.216 "write": true, 00:11:59.216 "unmap": true, 00:11:59.216 "flush": true, 00:11:59.216 "reset": true, 00:11:59.216 "nvme_admin": false, 00:11:59.216 "nvme_io": false, 00:11:59.216 "nvme_io_md": false, 00:11:59.216 "write_zeroes": true, 00:11:59.216 "zcopy": true, 00:11:59.216 "get_zone_info": false, 00:11:59.216 "zone_management": false, 00:11:59.216 "zone_append": false, 00:11:59.216 "compare": false, 00:11:59.216 "compare_and_write": false, 00:11:59.216 "abort": true, 00:11:59.216 "seek_hole": false, 00:11:59.216 "seek_data": false, 00:11:59.216 "copy": true, 00:11:59.216 "nvme_iov_md": false 00:11:59.216 }, 00:11:59.216 "memory_domains": [ 00:11:59.216 { 00:11:59.216 "dma_device_id": "system", 00:11:59.216 "dma_device_type": 1 00:11:59.216 }, 00:11:59.216 { 00:11:59.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.216 "dma_device_type": 2 00:11:59.216 } 00:11:59.216 ], 00:11:59.216 "driver_specific": {} 00:11:59.216 } 00:11:59.216 ] 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.216 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.217 
16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.217 "name": "Existed_Raid", 00:11:59.217 "uuid": "ec6db274-1eab-4bab-8903-e0febc8480b4", 00:11:59.217 "strip_size_kb": 0, 00:11:59.217 "state": "configuring", 00:11:59.217 "raid_level": "raid1", 00:11:59.217 "superblock": true, 00:11:59.217 "num_base_bdevs": 3, 00:11:59.217 "num_base_bdevs_discovered": 2, 00:11:59.217 "num_base_bdevs_operational": 3, 00:11:59.217 "base_bdevs_list": [ 00:11:59.217 { 00:11:59.217 "name": "BaseBdev1", 00:11:59.217 "uuid": "1e82cd8e-3adc-4e13-9a7c-3793df5b2cb1", 00:11:59.217 "is_configured": true, 00:11:59.217 "data_offset": 2048, 00:11:59.217 "data_size": 63488 00:11:59.217 }, 00:11:59.217 { 00:11:59.217 "name": "BaseBdev2", 00:11:59.217 "uuid": "62d248b6-90c8-439b-8ac1-f05fecd4e4ee", 00:11:59.217 "is_configured": true, 00:11:59.217 "data_offset": 2048, 00:11:59.217 "data_size": 63488 00:11:59.217 }, 00:11:59.217 { 00:11:59.217 "name": "BaseBdev3", 00:11:59.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.217 "is_configured": false, 00:11:59.217 "data_offset": 0, 00:11:59.217 "data_size": 0 00:11:59.217 } 00:11:59.217 ] 00:11:59.217 }' 00:11:59.217 16:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.217 16:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.474 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:59.474 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.474 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.474 [2024-12-06 16:27:41.304514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.474 [2024-12-06 16:27:41.304898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006980 00:11:59.474 [2024-12-06 16:27:41.304940] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:59.474 [2024-12-06 16:27:41.305361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:59.474 [2024-12-06 16:27:41.305584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:59.474 [2024-12-06 16:27:41.305661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:11:59.474 BaseBdev3 00:11:59.474 [2024-12-06 16:27:41.305902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.474 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.475 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:59.475 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:59.475 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.475 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:59.475 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.475 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.475 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.475 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.475 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.733 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.733 16:27:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:59.733 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.733 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.733 [ 00:11:59.733 { 00:11:59.733 "name": "BaseBdev3", 00:11:59.733 "aliases": [ 00:11:59.733 "a8d72814-04b1-47b6-a2ba-371b48e88ade" 00:11:59.733 ], 00:11:59.734 "product_name": "Malloc disk", 00:11:59.734 "block_size": 512, 00:11:59.734 "num_blocks": 65536, 00:11:59.734 "uuid": "a8d72814-04b1-47b6-a2ba-371b48e88ade", 00:11:59.734 "assigned_rate_limits": { 00:11:59.734 "rw_ios_per_sec": 0, 00:11:59.734 "rw_mbytes_per_sec": 0, 00:11:59.734 "r_mbytes_per_sec": 0, 00:11:59.734 "w_mbytes_per_sec": 0 00:11:59.734 }, 00:11:59.734 "claimed": true, 00:11:59.734 "claim_type": "exclusive_write", 00:11:59.734 "zoned": false, 00:11:59.734 "supported_io_types": { 00:11:59.734 "read": true, 00:11:59.734 "write": true, 00:11:59.734 "unmap": true, 00:11:59.734 "flush": true, 00:11:59.734 "reset": true, 00:11:59.734 "nvme_admin": false, 00:11:59.734 "nvme_io": false, 00:11:59.734 "nvme_io_md": false, 00:11:59.734 "write_zeroes": true, 00:11:59.734 "zcopy": true, 00:11:59.734 "get_zone_info": false, 00:11:59.734 "zone_management": false, 00:11:59.734 "zone_append": false, 00:11:59.734 "compare": false, 00:11:59.734 "compare_and_write": false, 00:11:59.734 "abort": true, 00:11:59.734 "seek_hole": false, 00:11:59.734 "seek_data": false, 00:11:59.734 "copy": true, 00:11:59.734 "nvme_iov_md": false 00:11:59.734 }, 00:11:59.734 "memory_domains": [ 00:11:59.734 { 00:11:59.734 "dma_device_id": "system", 00:11:59.734 "dma_device_type": 1 00:11:59.734 }, 00:11:59.734 { 00:11:59.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.734 "dma_device_type": 2 00:11:59.734 } 00:11:59.734 ], 00:11:59.734 "driver_specific": {} 00:11:59.734 } 00:11:59.734 ] 
00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.734 16:27:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.734 "name": "Existed_Raid", 00:11:59.734 "uuid": "ec6db274-1eab-4bab-8903-e0febc8480b4", 00:11:59.734 "strip_size_kb": 0, 00:11:59.734 "state": "online", 00:11:59.734 "raid_level": "raid1", 00:11:59.734 "superblock": true, 00:11:59.734 "num_base_bdevs": 3, 00:11:59.734 "num_base_bdevs_discovered": 3, 00:11:59.734 "num_base_bdevs_operational": 3, 00:11:59.734 "base_bdevs_list": [ 00:11:59.734 { 00:11:59.734 "name": "BaseBdev1", 00:11:59.734 "uuid": "1e82cd8e-3adc-4e13-9a7c-3793df5b2cb1", 00:11:59.734 "is_configured": true, 00:11:59.734 "data_offset": 2048, 00:11:59.734 "data_size": 63488 00:11:59.734 }, 00:11:59.734 { 00:11:59.734 "name": "BaseBdev2", 00:11:59.734 "uuid": "62d248b6-90c8-439b-8ac1-f05fecd4e4ee", 00:11:59.734 "is_configured": true, 00:11:59.734 "data_offset": 2048, 00:11:59.734 "data_size": 63488 00:11:59.734 }, 00:11:59.734 { 00:11:59.734 "name": "BaseBdev3", 00:11:59.734 "uuid": "a8d72814-04b1-47b6-a2ba-371b48e88ade", 00:11:59.734 "is_configured": true, 00:11:59.734 "data_offset": 2048, 00:11:59.734 "data_size": 63488 00:11:59.734 } 00:11:59.734 ] 00:11:59.734 }' 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.734 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.993 [2024-12-06 16:27:41.752195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.993 "name": "Existed_Raid", 00:11:59.993 "aliases": [ 00:11:59.993 "ec6db274-1eab-4bab-8903-e0febc8480b4" 00:11:59.993 ], 00:11:59.993 "product_name": "Raid Volume", 00:11:59.993 "block_size": 512, 00:11:59.993 "num_blocks": 63488, 00:11:59.993 "uuid": "ec6db274-1eab-4bab-8903-e0febc8480b4", 00:11:59.993 "assigned_rate_limits": { 00:11:59.993 "rw_ios_per_sec": 0, 00:11:59.993 "rw_mbytes_per_sec": 0, 00:11:59.993 "r_mbytes_per_sec": 0, 00:11:59.993 "w_mbytes_per_sec": 0 00:11:59.993 }, 00:11:59.993 "claimed": false, 00:11:59.993 "zoned": false, 00:11:59.993 "supported_io_types": { 00:11:59.993 "read": true, 00:11:59.993 "write": true, 00:11:59.993 "unmap": false, 00:11:59.993 "flush": false, 00:11:59.993 "reset": true, 00:11:59.993 "nvme_admin": false, 00:11:59.993 "nvme_io": false, 00:11:59.993 "nvme_io_md": false, 00:11:59.993 
"write_zeroes": true, 00:11:59.993 "zcopy": false, 00:11:59.993 "get_zone_info": false, 00:11:59.993 "zone_management": false, 00:11:59.993 "zone_append": false, 00:11:59.993 "compare": false, 00:11:59.993 "compare_and_write": false, 00:11:59.993 "abort": false, 00:11:59.993 "seek_hole": false, 00:11:59.993 "seek_data": false, 00:11:59.993 "copy": false, 00:11:59.993 "nvme_iov_md": false 00:11:59.993 }, 00:11:59.993 "memory_domains": [ 00:11:59.993 { 00:11:59.993 "dma_device_id": "system", 00:11:59.993 "dma_device_type": 1 00:11:59.993 }, 00:11:59.993 { 00:11:59.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.993 "dma_device_type": 2 00:11:59.993 }, 00:11:59.993 { 00:11:59.993 "dma_device_id": "system", 00:11:59.993 "dma_device_type": 1 00:11:59.993 }, 00:11:59.993 { 00:11:59.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.993 "dma_device_type": 2 00:11:59.993 }, 00:11:59.993 { 00:11:59.993 "dma_device_id": "system", 00:11:59.993 "dma_device_type": 1 00:11:59.993 }, 00:11:59.993 { 00:11:59.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.993 "dma_device_type": 2 00:11:59.993 } 00:11:59.993 ], 00:11:59.993 "driver_specific": { 00:11:59.993 "raid": { 00:11:59.993 "uuid": "ec6db274-1eab-4bab-8903-e0febc8480b4", 00:11:59.993 "strip_size_kb": 0, 00:11:59.993 "state": "online", 00:11:59.993 "raid_level": "raid1", 00:11:59.993 "superblock": true, 00:11:59.993 "num_base_bdevs": 3, 00:11:59.993 "num_base_bdevs_discovered": 3, 00:11:59.993 "num_base_bdevs_operational": 3, 00:11:59.993 "base_bdevs_list": [ 00:11:59.993 { 00:11:59.993 "name": "BaseBdev1", 00:11:59.993 "uuid": "1e82cd8e-3adc-4e13-9a7c-3793df5b2cb1", 00:11:59.993 "is_configured": true, 00:11:59.993 "data_offset": 2048, 00:11:59.993 "data_size": 63488 00:11:59.993 }, 00:11:59.993 { 00:11:59.993 "name": "BaseBdev2", 00:11:59.993 "uuid": "62d248b6-90c8-439b-8ac1-f05fecd4e4ee", 00:11:59.993 "is_configured": true, 00:11:59.993 "data_offset": 2048, 00:11:59.993 "data_size": 63488 00:11:59.993 }, 
00:11:59.993 { 00:11:59.993 "name": "BaseBdev3", 00:11:59.993 "uuid": "a8d72814-04b1-47b6-a2ba-371b48e88ade", 00:11:59.993 "is_configured": true, 00:11:59.993 "data_offset": 2048, 00:11:59.993 "data_size": 63488 00:11:59.993 } 00:11:59.993 ] 00:11:59.993 } 00:11:59.993 } 00:11:59.993 }' 00:11:59.993 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:00.253 BaseBdev2 00:12:00.253 BaseBdev3' 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.253 
16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.253 16:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.253 [2024-12-06 16:27:42.023505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.253 
16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.253 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.253 "name": "Existed_Raid", 00:12:00.253 "uuid": "ec6db274-1eab-4bab-8903-e0febc8480b4", 00:12:00.253 "strip_size_kb": 0, 00:12:00.253 "state": "online", 00:12:00.253 "raid_level": "raid1", 00:12:00.254 "superblock": true, 00:12:00.254 "num_base_bdevs": 3, 00:12:00.254 "num_base_bdevs_discovered": 2, 00:12:00.254 "num_base_bdevs_operational": 2, 00:12:00.254 "base_bdevs_list": [ 00:12:00.254 { 00:12:00.254 "name": null, 00:12:00.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.254 "is_configured": false, 00:12:00.254 "data_offset": 0, 00:12:00.254 "data_size": 63488 00:12:00.254 }, 00:12:00.254 { 00:12:00.254 "name": "BaseBdev2", 00:12:00.254 "uuid": "62d248b6-90c8-439b-8ac1-f05fecd4e4ee", 00:12:00.254 "is_configured": true, 00:12:00.254 "data_offset": 2048, 00:12:00.254 "data_size": 63488 00:12:00.254 }, 00:12:00.254 { 00:12:00.254 "name": "BaseBdev3", 00:12:00.254 "uuid": "a8d72814-04b1-47b6-a2ba-371b48e88ade", 00:12:00.254 "is_configured": true, 00:12:00.254 "data_offset": 2048, 00:12:00.254 "data_size": 63488 00:12:00.254 } 00:12:00.254 ] 00:12:00.254 }' 00:12:00.254 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.254 
16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.822 [2024-12-06 16:27:42.530847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.822 [2024-12-06 16:27:42.598655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.822 [2024-12-06 16:27:42.598842] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.822 [2024-12-06 16:27:42.611254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.822 [2024-12-06 16:27:42.611397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.822 [2024-12-06 16:27:42.611452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:00.822 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.082 BaseBdev2 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.082 [ 00:12:01.082 { 00:12:01.082 "name": "BaseBdev2", 00:12:01.082 "aliases": [ 00:12:01.082 "bfa09996-c352-4ac8-8bd1-d60f06e0c233" 00:12:01.082 ], 00:12:01.082 "product_name": "Malloc disk", 00:12:01.082 "block_size": 512, 00:12:01.082 "num_blocks": 65536, 00:12:01.082 "uuid": "bfa09996-c352-4ac8-8bd1-d60f06e0c233", 00:12:01.082 "assigned_rate_limits": { 00:12:01.082 "rw_ios_per_sec": 0, 00:12:01.082 "rw_mbytes_per_sec": 0, 00:12:01.082 "r_mbytes_per_sec": 0, 00:12:01.082 "w_mbytes_per_sec": 0 00:12:01.082 }, 00:12:01.082 "claimed": false, 00:12:01.082 "zoned": false, 00:12:01.082 "supported_io_types": { 00:12:01.082 "read": true, 00:12:01.082 "write": true, 00:12:01.082 "unmap": true, 00:12:01.082 "flush": true, 00:12:01.082 "reset": true, 00:12:01.082 "nvme_admin": false, 00:12:01.082 "nvme_io": false, 00:12:01.082 
"nvme_io_md": false, 00:12:01.082 "write_zeroes": true, 00:12:01.082 "zcopy": true, 00:12:01.082 "get_zone_info": false, 00:12:01.082 "zone_management": false, 00:12:01.082 "zone_append": false, 00:12:01.082 "compare": false, 00:12:01.082 "compare_and_write": false, 00:12:01.082 "abort": true, 00:12:01.082 "seek_hole": false, 00:12:01.082 "seek_data": false, 00:12:01.082 "copy": true, 00:12:01.082 "nvme_iov_md": false 00:12:01.082 }, 00:12:01.082 "memory_domains": [ 00:12:01.082 { 00:12:01.082 "dma_device_id": "system", 00:12:01.082 "dma_device_type": 1 00:12:01.082 }, 00:12:01.082 { 00:12:01.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.082 "dma_device_type": 2 00:12:01.082 } 00:12:01.082 ], 00:12:01.082 "driver_specific": {} 00:12:01.082 } 00:12:01.082 ] 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.082 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.082 BaseBdev3 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.083 [ 00:12:01.083 { 00:12:01.083 "name": "BaseBdev3", 00:12:01.083 "aliases": [ 00:12:01.083 "f4865e27-1cc9-4079-b04d-cfb8b188e0d1" 00:12:01.083 ], 00:12:01.083 "product_name": "Malloc disk", 00:12:01.083 "block_size": 512, 00:12:01.083 "num_blocks": 65536, 00:12:01.083 "uuid": "f4865e27-1cc9-4079-b04d-cfb8b188e0d1", 00:12:01.083 "assigned_rate_limits": { 00:12:01.083 "rw_ios_per_sec": 0, 00:12:01.083 "rw_mbytes_per_sec": 0, 00:12:01.083 "r_mbytes_per_sec": 0, 00:12:01.083 "w_mbytes_per_sec": 0 00:12:01.083 }, 00:12:01.083 "claimed": false, 00:12:01.083 "zoned": false, 00:12:01.083 "supported_io_types": { 00:12:01.083 "read": true, 00:12:01.083 "write": true, 00:12:01.083 "unmap": true, 00:12:01.083 "flush": true, 00:12:01.083 "reset": true, 00:12:01.083 "nvme_admin": false, 
00:12:01.083 "nvme_io": false, 00:12:01.083 "nvme_io_md": false, 00:12:01.083 "write_zeroes": true, 00:12:01.083 "zcopy": true, 00:12:01.083 "get_zone_info": false, 00:12:01.083 "zone_management": false, 00:12:01.083 "zone_append": false, 00:12:01.083 "compare": false, 00:12:01.083 "compare_and_write": false, 00:12:01.083 "abort": true, 00:12:01.083 "seek_hole": false, 00:12:01.083 "seek_data": false, 00:12:01.083 "copy": true, 00:12:01.083 "nvme_iov_md": false 00:12:01.083 }, 00:12:01.083 "memory_domains": [ 00:12:01.083 { 00:12:01.083 "dma_device_id": "system", 00:12:01.083 "dma_device_type": 1 00:12:01.083 }, 00:12:01.083 { 00:12:01.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.083 "dma_device_type": 2 00:12:01.083 } 00:12:01.083 ], 00:12:01.083 "driver_specific": {} 00:12:01.083 } 00:12:01.083 ] 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.083 [2024-12-06 16:27:42.773780] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.083 [2024-12-06 16:27:42.773904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.083 [2024-12-06 16:27:42.773979] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.083 [2024-12-06 16:27:42.776281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.083 
16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.083 "name": "Existed_Raid", 00:12:01.083 "uuid": "87d6a7f2-a74a-4bbf-89b4-c9d2fb74e9c1", 00:12:01.083 "strip_size_kb": 0, 00:12:01.083 "state": "configuring", 00:12:01.083 "raid_level": "raid1", 00:12:01.083 "superblock": true, 00:12:01.083 "num_base_bdevs": 3, 00:12:01.083 "num_base_bdevs_discovered": 2, 00:12:01.083 "num_base_bdevs_operational": 3, 00:12:01.083 "base_bdevs_list": [ 00:12:01.083 { 00:12:01.083 "name": "BaseBdev1", 00:12:01.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.083 "is_configured": false, 00:12:01.083 "data_offset": 0, 00:12:01.083 "data_size": 0 00:12:01.083 }, 00:12:01.083 { 00:12:01.083 "name": "BaseBdev2", 00:12:01.083 "uuid": "bfa09996-c352-4ac8-8bd1-d60f06e0c233", 00:12:01.083 "is_configured": true, 00:12:01.083 "data_offset": 2048, 00:12:01.083 "data_size": 63488 00:12:01.083 }, 00:12:01.083 { 00:12:01.083 "name": "BaseBdev3", 00:12:01.083 "uuid": "f4865e27-1cc9-4079-b04d-cfb8b188e0d1", 00:12:01.083 "is_configured": true, 00:12:01.083 "data_offset": 2048, 00:12:01.083 "data_size": 63488 00:12:01.083 } 00:12:01.083 ] 00:12:01.083 }' 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.083 16:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.652 [2024-12-06 16:27:43.256923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:01.652 16:27:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.652 "name": 
"Existed_Raid", 00:12:01.652 "uuid": "87d6a7f2-a74a-4bbf-89b4-c9d2fb74e9c1", 00:12:01.652 "strip_size_kb": 0, 00:12:01.652 "state": "configuring", 00:12:01.652 "raid_level": "raid1", 00:12:01.652 "superblock": true, 00:12:01.652 "num_base_bdevs": 3, 00:12:01.652 "num_base_bdevs_discovered": 1, 00:12:01.652 "num_base_bdevs_operational": 3, 00:12:01.652 "base_bdevs_list": [ 00:12:01.652 { 00:12:01.652 "name": "BaseBdev1", 00:12:01.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.652 "is_configured": false, 00:12:01.652 "data_offset": 0, 00:12:01.652 "data_size": 0 00:12:01.652 }, 00:12:01.652 { 00:12:01.652 "name": null, 00:12:01.652 "uuid": "bfa09996-c352-4ac8-8bd1-d60f06e0c233", 00:12:01.652 "is_configured": false, 00:12:01.652 "data_offset": 0, 00:12:01.652 "data_size": 63488 00:12:01.652 }, 00:12:01.652 { 00:12:01.652 "name": "BaseBdev3", 00:12:01.652 "uuid": "f4865e27-1cc9-4079-b04d-cfb8b188e0d1", 00:12:01.652 "is_configured": true, 00:12:01.652 "data_offset": 2048, 00:12:01.652 "data_size": 63488 00:12:01.652 } 00:12:01.652 ] 00:12:01.652 }' 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.652 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.911 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.911 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:01.911 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.911 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.911 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.911 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:01.911 
16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.911 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.911 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.911 [2024-12-06 16:27:43.747610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.911 BaseBdev1 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:02.169 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.169 [ 00:12:02.169 { 00:12:02.169 "name": "BaseBdev1", 00:12:02.169 "aliases": [ 00:12:02.169 "6c157b28-2b4b-4184-b137-e3e6ad824644" 00:12:02.169 ], 00:12:02.169 "product_name": "Malloc disk", 00:12:02.169 "block_size": 512, 00:12:02.169 "num_blocks": 65536, 00:12:02.169 "uuid": "6c157b28-2b4b-4184-b137-e3e6ad824644", 00:12:02.169 "assigned_rate_limits": { 00:12:02.169 "rw_ios_per_sec": 0, 00:12:02.169 "rw_mbytes_per_sec": 0, 00:12:02.169 "r_mbytes_per_sec": 0, 00:12:02.169 "w_mbytes_per_sec": 0 00:12:02.169 }, 00:12:02.169 "claimed": true, 00:12:02.169 "claim_type": "exclusive_write", 00:12:02.169 "zoned": false, 00:12:02.169 "supported_io_types": { 00:12:02.169 "read": true, 00:12:02.169 "write": true, 00:12:02.169 "unmap": true, 00:12:02.169 "flush": true, 00:12:02.169 "reset": true, 00:12:02.169 "nvme_admin": false, 00:12:02.169 "nvme_io": false, 00:12:02.169 "nvme_io_md": false, 00:12:02.169 "write_zeroes": true, 00:12:02.169 "zcopy": true, 00:12:02.169 "get_zone_info": false, 00:12:02.170 "zone_management": false, 00:12:02.170 "zone_append": false, 00:12:02.170 "compare": false, 00:12:02.170 "compare_and_write": false, 00:12:02.170 "abort": true, 00:12:02.170 "seek_hole": false, 00:12:02.170 "seek_data": false, 00:12:02.170 "copy": true, 00:12:02.170 "nvme_iov_md": false 00:12:02.170 }, 00:12:02.170 "memory_domains": [ 00:12:02.170 { 00:12:02.170 "dma_device_id": "system", 00:12:02.170 "dma_device_type": 1 00:12:02.170 }, 00:12:02.170 { 00:12:02.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.170 "dma_device_type": 2 00:12:02.170 } 00:12:02.170 ], 00:12:02.170 "driver_specific": {} 00:12:02.170 } 00:12:02.170 ] 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:02.170 
16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.170 "name": "Existed_Raid", 00:12:02.170 "uuid": "87d6a7f2-a74a-4bbf-89b4-c9d2fb74e9c1", 00:12:02.170 "strip_size_kb": 0, 
00:12:02.170 "state": "configuring", 00:12:02.170 "raid_level": "raid1", 00:12:02.170 "superblock": true, 00:12:02.170 "num_base_bdevs": 3, 00:12:02.170 "num_base_bdevs_discovered": 2, 00:12:02.170 "num_base_bdevs_operational": 3, 00:12:02.170 "base_bdevs_list": [ 00:12:02.170 { 00:12:02.170 "name": "BaseBdev1", 00:12:02.170 "uuid": "6c157b28-2b4b-4184-b137-e3e6ad824644", 00:12:02.170 "is_configured": true, 00:12:02.170 "data_offset": 2048, 00:12:02.170 "data_size": 63488 00:12:02.170 }, 00:12:02.170 { 00:12:02.170 "name": null, 00:12:02.170 "uuid": "bfa09996-c352-4ac8-8bd1-d60f06e0c233", 00:12:02.170 "is_configured": false, 00:12:02.170 "data_offset": 0, 00:12:02.170 "data_size": 63488 00:12:02.170 }, 00:12:02.170 { 00:12:02.170 "name": "BaseBdev3", 00:12:02.170 "uuid": "f4865e27-1cc9-4079-b04d-cfb8b188e0d1", 00:12:02.170 "is_configured": true, 00:12:02.170 "data_offset": 2048, 00:12:02.170 "data_size": 63488 00:12:02.170 } 00:12:02.170 ] 00:12:02.170 }' 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.170 16:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.429 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.429 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.429 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.429 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:02.429 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.688 [2024-12-06 16:27:44.294779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.688 "name": "Existed_Raid", 00:12:02.688 "uuid": "87d6a7f2-a74a-4bbf-89b4-c9d2fb74e9c1", 00:12:02.688 "strip_size_kb": 0, 00:12:02.688 "state": "configuring", 00:12:02.688 "raid_level": "raid1", 00:12:02.688 "superblock": true, 00:12:02.688 "num_base_bdevs": 3, 00:12:02.688 "num_base_bdevs_discovered": 1, 00:12:02.688 "num_base_bdevs_operational": 3, 00:12:02.688 "base_bdevs_list": [ 00:12:02.688 { 00:12:02.688 "name": "BaseBdev1", 00:12:02.688 "uuid": "6c157b28-2b4b-4184-b137-e3e6ad824644", 00:12:02.688 "is_configured": true, 00:12:02.688 "data_offset": 2048, 00:12:02.688 "data_size": 63488 00:12:02.688 }, 00:12:02.688 { 00:12:02.688 "name": null, 00:12:02.688 "uuid": "bfa09996-c352-4ac8-8bd1-d60f06e0c233", 00:12:02.688 "is_configured": false, 00:12:02.688 "data_offset": 0, 00:12:02.688 "data_size": 63488 00:12:02.688 }, 00:12:02.688 { 00:12:02.688 "name": null, 00:12:02.688 "uuid": "f4865e27-1cc9-4079-b04d-cfb8b188e0d1", 00:12:02.688 "is_configured": false, 00:12:02.688 "data_offset": 0, 00:12:02.688 "data_size": 63488 00:12:02.688 } 00:12:02.688 ] 00:12:02.688 }' 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.688 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.948 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.948 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.948 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.948 16:27:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:03.208 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.209 [2024-12-06 16:27:44.825895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.209 "name": "Existed_Raid", 00:12:03.209 "uuid": "87d6a7f2-a74a-4bbf-89b4-c9d2fb74e9c1", 00:12:03.209 "strip_size_kb": 0, 00:12:03.209 "state": "configuring", 00:12:03.209 "raid_level": "raid1", 00:12:03.209 "superblock": true, 00:12:03.209 "num_base_bdevs": 3, 00:12:03.209 "num_base_bdevs_discovered": 2, 00:12:03.209 "num_base_bdevs_operational": 3, 00:12:03.209 "base_bdevs_list": [ 00:12:03.209 { 00:12:03.209 "name": "BaseBdev1", 00:12:03.209 "uuid": "6c157b28-2b4b-4184-b137-e3e6ad824644", 00:12:03.209 "is_configured": true, 00:12:03.209 "data_offset": 2048, 00:12:03.209 "data_size": 63488 00:12:03.209 }, 00:12:03.209 { 00:12:03.209 "name": null, 00:12:03.209 "uuid": "bfa09996-c352-4ac8-8bd1-d60f06e0c233", 00:12:03.209 "is_configured": false, 00:12:03.209 "data_offset": 0, 00:12:03.209 "data_size": 63488 00:12:03.209 }, 00:12:03.209 { 00:12:03.209 "name": "BaseBdev3", 00:12:03.209 "uuid": "f4865e27-1cc9-4079-b04d-cfb8b188e0d1", 00:12:03.209 "is_configured": true, 00:12:03.209 "data_offset": 2048, 00:12:03.209 "data_size": 63488 00:12:03.209 } 00:12:03.209 ] 00:12:03.209 }' 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.209 16:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.499 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.499 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.499 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.499 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:03.499 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.499 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:03.499 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:03.499 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.499 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.499 [2024-12-06 16:27:45.333086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.756 "name": "Existed_Raid", 00:12:03.756 "uuid": "87d6a7f2-a74a-4bbf-89b4-c9d2fb74e9c1", 00:12:03.756 "strip_size_kb": 0, 00:12:03.756 "state": "configuring", 00:12:03.756 "raid_level": "raid1", 00:12:03.756 "superblock": true, 00:12:03.756 "num_base_bdevs": 3, 00:12:03.756 "num_base_bdevs_discovered": 1, 00:12:03.756 "num_base_bdevs_operational": 3, 00:12:03.756 "base_bdevs_list": [ 00:12:03.756 { 00:12:03.756 "name": null, 00:12:03.756 "uuid": "6c157b28-2b4b-4184-b137-e3e6ad824644", 00:12:03.756 "is_configured": false, 00:12:03.756 "data_offset": 0, 00:12:03.756 "data_size": 63488 00:12:03.756 }, 00:12:03.756 { 00:12:03.756 "name": null, 00:12:03.756 "uuid": 
"bfa09996-c352-4ac8-8bd1-d60f06e0c233", 00:12:03.756 "is_configured": false, 00:12:03.756 "data_offset": 0, 00:12:03.756 "data_size": 63488 00:12:03.756 }, 00:12:03.756 { 00:12:03.756 "name": "BaseBdev3", 00:12:03.756 "uuid": "f4865e27-1cc9-4079-b04d-cfb8b188e0d1", 00:12:03.756 "is_configured": true, 00:12:03.756 "data_offset": 2048, 00:12:03.756 "data_size": 63488 00:12:03.756 } 00:12:03.756 ] 00:12:03.756 }' 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.756 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.013 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.013 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:04.013 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.013 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.013 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.271 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:04.271 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:04.271 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.271 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.272 [2024-12-06 16:27:45.871360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.272 "name": "Existed_Raid", 00:12:04.272 "uuid": "87d6a7f2-a74a-4bbf-89b4-c9d2fb74e9c1", 00:12:04.272 "strip_size_kb": 0, 00:12:04.272 "state": "configuring", 00:12:04.272 
"raid_level": "raid1", 00:12:04.272 "superblock": true, 00:12:04.272 "num_base_bdevs": 3, 00:12:04.272 "num_base_bdevs_discovered": 2, 00:12:04.272 "num_base_bdevs_operational": 3, 00:12:04.272 "base_bdevs_list": [ 00:12:04.272 { 00:12:04.272 "name": null, 00:12:04.272 "uuid": "6c157b28-2b4b-4184-b137-e3e6ad824644", 00:12:04.272 "is_configured": false, 00:12:04.272 "data_offset": 0, 00:12:04.272 "data_size": 63488 00:12:04.272 }, 00:12:04.272 { 00:12:04.272 "name": "BaseBdev2", 00:12:04.272 "uuid": "bfa09996-c352-4ac8-8bd1-d60f06e0c233", 00:12:04.272 "is_configured": true, 00:12:04.272 "data_offset": 2048, 00:12:04.272 "data_size": 63488 00:12:04.272 }, 00:12:04.272 { 00:12:04.272 "name": "BaseBdev3", 00:12:04.272 "uuid": "f4865e27-1cc9-4079-b04d-cfb8b188e0d1", 00:12:04.272 "is_configured": true, 00:12:04.272 "data_offset": 2048, 00:12:04.272 "data_size": 63488 00:12:04.272 } 00:12:04.272 ] 00:12:04.272 }' 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.272 16:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.529 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.529 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:04.529 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.529 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.787 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.787 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:04.787 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:04.787 16:27:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.787 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.787 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.787 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.787 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6c157b28-2b4b-4184-b137-e3e6ad824644 00:12:04.787 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.787 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.787 [2024-12-06 16:27:46.473739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:04.787 NewBaseBdev 00:12:04.787 [2024-12-06 16:27:46.473997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:04.787 [2024-12-06 16:27:46.474032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.787 [2024-12-06 16:27:46.474364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:04.787 [2024-12-06 16:27:46.474506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:04.787 [2024-12-06 16:27:46.474523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:12:04.787 [2024-12-06 16:27:46.474648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.787 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.787 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:04.787 
16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.788 [ 00:12:04.788 { 00:12:04.788 "name": "NewBaseBdev", 00:12:04.788 "aliases": [ 00:12:04.788 "6c157b28-2b4b-4184-b137-e3e6ad824644" 00:12:04.788 ], 00:12:04.788 "product_name": "Malloc disk", 00:12:04.788 "block_size": 512, 00:12:04.788 "num_blocks": 65536, 00:12:04.788 "uuid": "6c157b28-2b4b-4184-b137-e3e6ad824644", 00:12:04.788 "assigned_rate_limits": { 00:12:04.788 "rw_ios_per_sec": 0, 00:12:04.788 "rw_mbytes_per_sec": 0, 00:12:04.788 "r_mbytes_per_sec": 0, 00:12:04.788 "w_mbytes_per_sec": 0 00:12:04.788 }, 00:12:04.788 "claimed": true, 00:12:04.788 "claim_type": "exclusive_write", 00:12:04.788 
"zoned": false, 00:12:04.788 "supported_io_types": { 00:12:04.788 "read": true, 00:12:04.788 "write": true, 00:12:04.788 "unmap": true, 00:12:04.788 "flush": true, 00:12:04.788 "reset": true, 00:12:04.788 "nvme_admin": false, 00:12:04.788 "nvme_io": false, 00:12:04.788 "nvme_io_md": false, 00:12:04.788 "write_zeroes": true, 00:12:04.788 "zcopy": true, 00:12:04.788 "get_zone_info": false, 00:12:04.788 "zone_management": false, 00:12:04.788 "zone_append": false, 00:12:04.788 "compare": false, 00:12:04.788 "compare_and_write": false, 00:12:04.788 "abort": true, 00:12:04.788 "seek_hole": false, 00:12:04.788 "seek_data": false, 00:12:04.788 "copy": true, 00:12:04.788 "nvme_iov_md": false 00:12:04.788 }, 00:12:04.788 "memory_domains": [ 00:12:04.788 { 00:12:04.788 "dma_device_id": "system", 00:12:04.788 "dma_device_type": 1 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.788 "dma_device_type": 2 00:12:04.788 } 00:12:04.788 ], 00:12:04.788 "driver_specific": {} 00:12:04.788 } 00:12:04.788 ] 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.788 "name": "Existed_Raid", 00:12:04.788 "uuid": "87d6a7f2-a74a-4bbf-89b4-c9d2fb74e9c1", 00:12:04.788 "strip_size_kb": 0, 00:12:04.788 "state": "online", 00:12:04.788 "raid_level": "raid1", 00:12:04.788 "superblock": true, 00:12:04.788 "num_base_bdevs": 3, 00:12:04.788 "num_base_bdevs_discovered": 3, 00:12:04.788 "num_base_bdevs_operational": 3, 00:12:04.788 "base_bdevs_list": [ 00:12:04.788 { 00:12:04.788 "name": "NewBaseBdev", 00:12:04.788 "uuid": "6c157b28-2b4b-4184-b137-e3e6ad824644", 00:12:04.788 "is_configured": true, 00:12:04.788 "data_offset": 2048, 00:12:04.788 "data_size": 63488 00:12:04.788 }, 00:12:04.788 { 00:12:04.788 "name": "BaseBdev2", 00:12:04.788 "uuid": "bfa09996-c352-4ac8-8bd1-d60f06e0c233", 00:12:04.788 "is_configured": true, 00:12:04.788 "data_offset": 2048, 00:12:04.788 "data_size": 63488 00:12:04.788 }, 00:12:04.788 
{ 00:12:04.788 "name": "BaseBdev3", 00:12:04.788 "uuid": "f4865e27-1cc9-4079-b04d-cfb8b188e0d1", 00:12:04.788 "is_configured": true, 00:12:04.788 "data_offset": 2048, 00:12:04.788 "data_size": 63488 00:12:04.788 } 00:12:04.788 ] 00:12:04.788 }' 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.788 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.356 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.356 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.356 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.356 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.356 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.356 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.356 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.357 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.357 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.357 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.357 [2024-12-06 16:27:46.933434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.357 16:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.357 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.357 "name": "Existed_Raid", 00:12:05.357 
"aliases": [ 00:12:05.357 "87d6a7f2-a74a-4bbf-89b4-c9d2fb74e9c1" 00:12:05.357 ], 00:12:05.357 "product_name": "Raid Volume", 00:12:05.357 "block_size": 512, 00:12:05.357 "num_blocks": 63488, 00:12:05.357 "uuid": "87d6a7f2-a74a-4bbf-89b4-c9d2fb74e9c1", 00:12:05.357 "assigned_rate_limits": { 00:12:05.357 "rw_ios_per_sec": 0, 00:12:05.357 "rw_mbytes_per_sec": 0, 00:12:05.357 "r_mbytes_per_sec": 0, 00:12:05.357 "w_mbytes_per_sec": 0 00:12:05.357 }, 00:12:05.357 "claimed": false, 00:12:05.357 "zoned": false, 00:12:05.357 "supported_io_types": { 00:12:05.357 "read": true, 00:12:05.357 "write": true, 00:12:05.357 "unmap": false, 00:12:05.357 "flush": false, 00:12:05.357 "reset": true, 00:12:05.357 "nvme_admin": false, 00:12:05.357 "nvme_io": false, 00:12:05.357 "nvme_io_md": false, 00:12:05.357 "write_zeroes": true, 00:12:05.357 "zcopy": false, 00:12:05.357 "get_zone_info": false, 00:12:05.357 "zone_management": false, 00:12:05.357 "zone_append": false, 00:12:05.357 "compare": false, 00:12:05.357 "compare_and_write": false, 00:12:05.357 "abort": false, 00:12:05.357 "seek_hole": false, 00:12:05.357 "seek_data": false, 00:12:05.357 "copy": false, 00:12:05.357 "nvme_iov_md": false 00:12:05.357 }, 00:12:05.357 "memory_domains": [ 00:12:05.357 { 00:12:05.357 "dma_device_id": "system", 00:12:05.357 "dma_device_type": 1 00:12:05.357 }, 00:12:05.357 { 00:12:05.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.357 "dma_device_type": 2 00:12:05.357 }, 00:12:05.357 { 00:12:05.357 "dma_device_id": "system", 00:12:05.357 "dma_device_type": 1 00:12:05.357 }, 00:12:05.357 { 00:12:05.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.357 "dma_device_type": 2 00:12:05.357 }, 00:12:05.357 { 00:12:05.357 "dma_device_id": "system", 00:12:05.357 "dma_device_type": 1 00:12:05.357 }, 00:12:05.357 { 00:12:05.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.357 "dma_device_type": 2 00:12:05.357 } 00:12:05.357 ], 00:12:05.357 "driver_specific": { 00:12:05.357 "raid": { 00:12:05.357 
"uuid": "87d6a7f2-a74a-4bbf-89b4-c9d2fb74e9c1", 00:12:05.357 "strip_size_kb": 0, 00:12:05.357 "state": "online", 00:12:05.357 "raid_level": "raid1", 00:12:05.357 "superblock": true, 00:12:05.357 "num_base_bdevs": 3, 00:12:05.357 "num_base_bdevs_discovered": 3, 00:12:05.357 "num_base_bdevs_operational": 3, 00:12:05.357 "base_bdevs_list": [ 00:12:05.357 { 00:12:05.357 "name": "NewBaseBdev", 00:12:05.357 "uuid": "6c157b28-2b4b-4184-b137-e3e6ad824644", 00:12:05.357 "is_configured": true, 00:12:05.357 "data_offset": 2048, 00:12:05.357 "data_size": 63488 00:12:05.357 }, 00:12:05.357 { 00:12:05.357 "name": "BaseBdev2", 00:12:05.357 "uuid": "bfa09996-c352-4ac8-8bd1-d60f06e0c233", 00:12:05.357 "is_configured": true, 00:12:05.357 "data_offset": 2048, 00:12:05.357 "data_size": 63488 00:12:05.357 }, 00:12:05.357 { 00:12:05.357 "name": "BaseBdev3", 00:12:05.357 "uuid": "f4865e27-1cc9-4079-b04d-cfb8b188e0d1", 00:12:05.357 "is_configured": true, 00:12:05.357 "data_offset": 2048, 00:12:05.357 "data_size": 63488 00:12:05.357 } 00:12:05.357 ] 00:12:05.357 } 00:12:05.357 } 00:12:05.357 }' 00:12:05.357 16:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:05.357 BaseBdev2 00:12:05.357 BaseBdev3' 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:05.357 16:27:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.357 16:27:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.357 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.617 [2024-12-06 16:27:47.232561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.617 [2024-12-06 16:27:47.232649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.617 [2024-12-06 16:27:47.232773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.617 [2024-12-06 16:27:47.233095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.617 [2024-12-06 16:27:47.233158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79448 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 79448 ']' 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 79448 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79448 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79448' 00:12:05.617 killing process with pid 79448 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 79448 00:12:05.617 [2024-12-06 16:27:47.281614] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.617 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 79448 00:12:05.617 [2024-12-06 16:27:47.314632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:05.875 16:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:05.875 00:12:05.875 real 0m9.264s 00:12:05.875 user 0m15.873s 00:12:05.875 sys 0m1.860s 00:12:05.875 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.875 16:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.875 ************************************ 00:12:05.875 END TEST raid_state_function_test_sb 00:12:05.875 ************************************ 00:12:05.875 16:27:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:12:05.875 16:27:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:05.875 16:27:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.875 16:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:05.875 ************************************ 00:12:05.875 START TEST raid_superblock_test 00:12:05.875 ************************************ 00:12:05.875 16:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:12:05.875 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80057 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80057 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80057 ']' 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.876 16:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.876 [2024-12-06 16:27:47.686219] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:12:05.876 [2024-12-06 16:27:47.686473] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80057 ] 00:12:06.135 [2024-12-06 16:27:47.865039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.135 [2024-12-06 16:27:47.895935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.135 [2024-12-06 16:27:47.941480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.135 [2024-12-06 16:27:47.941522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:07.072 
16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.072 malloc1 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.072 [2024-12-06 16:27:48.623234] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:07.072 [2024-12-06 16:27:48.623308] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.072 [2024-12-06 16:27:48.623340] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:07.072 [2024-12-06 16:27:48.623359] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.072 [2024-12-06 16:27:48.625915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.072 [2024-12-06 16:27:48.626010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:07.072 pt1 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.072 malloc2 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.072 [2024-12-06 16:27:48.648670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:07.072 [2024-12-06 16:27:48.648798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.072 [2024-12-06 16:27:48.648825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:07.072 [2024-12-06 16:27:48.648858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.072 [2024-12-06 16:27:48.651373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.072 [2024-12-06 16:27:48.651418] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:07.072 
pt2 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.072 malloc3 00:12:07.072 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.073 [2024-12-06 16:27:48.670061] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:07.073 [2024-12-06 16:27:48.670124] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.073 [2024-12-06 16:27:48.670146] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:07.073 [2024-12-06 16:27:48.670159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.073 [2024-12-06 16:27:48.672508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.073 [2024-12-06 16:27:48.672552] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:07.073 pt3 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.073 [2024-12-06 16:27:48.678089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:07.073 [2024-12-06 16:27:48.680188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:07.073 [2024-12-06 16:27:48.680271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:07.073 [2024-12-06 16:27:48.680436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:07.073 [2024-12-06 16:27:48.680451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:07.073 [2024-12-06 16:27:48.680751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:07.073 
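The trace above (bdev_raid.sh@416-430) loops over three base devices, creating a malloc bdev and wrapping it in a passthru bdev with a fixed UUID before assembling them into a raid1 volume with an on-disk superblock (`-s`). A minimal standalone sketch of that loop, with `rpc_cmd` stubbed out so the naming logic runs without a live SPDK target:

```shell
# Sketch of the base-bdev setup loop from bdev_raid.sh as seen in the trace.
# rpc_cmd is stubbed with echo here; in the real test it sends JSON-RPC to
# the running bdev_svc application.
num_base_bdevs=3
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()
rpc_cmd() { echo "rpc: $*"; }   # stub for illustration only

for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc="malloc$i"
    bdev_pt="pt$i"
    bdev_pt_uuid="00000000-0000-0000-0000-00000000000$i"
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
    # 32 MiB malloc bdev with 512-byte blocks, then a passthru on top of it
    rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done
# raid1 across the passthru bdevs, with superblock (-s)
rpc_cmd bdev_raid_create -r raid1 -b "${base_bdevs_pt[*]}" -n raid_bdev1 -s
echo "pt bdevs: ${base_bdevs_pt[*]}"
```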
[2024-12-06 16:27:48.680925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:07.073 [2024-12-06 16:27:48.680939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:07.073 [2024-12-06 16:27:48.681087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.073 "name": "raid_bdev1", 00:12:07.073 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:07.073 "strip_size_kb": 0, 00:12:07.073 "state": "online", 00:12:07.073 "raid_level": "raid1", 00:12:07.073 "superblock": true, 00:12:07.073 "num_base_bdevs": 3, 00:12:07.073 "num_base_bdevs_discovered": 3, 00:12:07.073 "num_base_bdevs_operational": 3, 00:12:07.073 "base_bdevs_list": [ 00:12:07.073 { 00:12:07.073 "name": "pt1", 00:12:07.073 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.073 "is_configured": true, 00:12:07.073 "data_offset": 2048, 00:12:07.073 "data_size": 63488 00:12:07.073 }, 00:12:07.073 { 00:12:07.073 "name": "pt2", 00:12:07.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.073 "is_configured": true, 00:12:07.073 "data_offset": 2048, 00:12:07.073 "data_size": 63488 00:12:07.073 }, 00:12:07.073 { 00:12:07.073 "name": "pt3", 00:12:07.073 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.073 "is_configured": true, 00:12:07.073 "data_offset": 2048, 00:12:07.073 "data_size": 63488 00:12:07.073 } 00:12:07.073 ] 00:12:07.073 }' 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.073 16:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.332 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:07.332 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:07.332 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:07.333 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:07.333 16:27:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:07.333 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:07.333 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:07.333 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:07.333 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.333 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.333 [2024-12-06 16:27:49.133718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.333 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.333 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:07.333 "name": "raid_bdev1", 00:12:07.333 "aliases": [ 00:12:07.333 "51fc5c21-abea-4c36-9fc5-bb37f984240c" 00:12:07.333 ], 00:12:07.333 "product_name": "Raid Volume", 00:12:07.333 "block_size": 512, 00:12:07.333 "num_blocks": 63488, 00:12:07.333 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:07.333 "assigned_rate_limits": { 00:12:07.333 "rw_ios_per_sec": 0, 00:12:07.333 "rw_mbytes_per_sec": 0, 00:12:07.333 "r_mbytes_per_sec": 0, 00:12:07.333 "w_mbytes_per_sec": 0 00:12:07.333 }, 00:12:07.333 "claimed": false, 00:12:07.333 "zoned": false, 00:12:07.333 "supported_io_types": { 00:12:07.333 "read": true, 00:12:07.333 "write": true, 00:12:07.333 "unmap": false, 00:12:07.333 "flush": false, 00:12:07.333 "reset": true, 00:12:07.333 "nvme_admin": false, 00:12:07.333 "nvme_io": false, 00:12:07.333 "nvme_io_md": false, 00:12:07.333 "write_zeroes": true, 00:12:07.333 "zcopy": false, 00:12:07.333 "get_zone_info": false, 00:12:07.333 "zone_management": false, 00:12:07.333 "zone_append": false, 00:12:07.333 "compare": false, 00:12:07.333 
"compare_and_write": false, 00:12:07.333 "abort": false, 00:12:07.333 "seek_hole": false, 00:12:07.333 "seek_data": false, 00:12:07.333 "copy": false, 00:12:07.333 "nvme_iov_md": false 00:12:07.333 }, 00:12:07.333 "memory_domains": [ 00:12:07.333 { 00:12:07.333 "dma_device_id": "system", 00:12:07.333 "dma_device_type": 1 00:12:07.333 }, 00:12:07.333 { 00:12:07.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.333 "dma_device_type": 2 00:12:07.333 }, 00:12:07.333 { 00:12:07.333 "dma_device_id": "system", 00:12:07.333 "dma_device_type": 1 00:12:07.333 }, 00:12:07.333 { 00:12:07.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.333 "dma_device_type": 2 00:12:07.333 }, 00:12:07.333 { 00:12:07.333 "dma_device_id": "system", 00:12:07.333 "dma_device_type": 1 00:12:07.333 }, 00:12:07.333 { 00:12:07.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.333 "dma_device_type": 2 00:12:07.333 } 00:12:07.333 ], 00:12:07.333 "driver_specific": { 00:12:07.333 "raid": { 00:12:07.333 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:07.333 "strip_size_kb": 0, 00:12:07.333 "state": "online", 00:12:07.333 "raid_level": "raid1", 00:12:07.333 "superblock": true, 00:12:07.333 "num_base_bdevs": 3, 00:12:07.333 "num_base_bdevs_discovered": 3, 00:12:07.333 "num_base_bdevs_operational": 3, 00:12:07.333 "base_bdevs_list": [ 00:12:07.333 { 00:12:07.333 "name": "pt1", 00:12:07.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.333 "is_configured": true, 00:12:07.333 "data_offset": 2048, 00:12:07.333 "data_size": 63488 00:12:07.333 }, 00:12:07.333 { 00:12:07.333 "name": "pt2", 00:12:07.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.333 "is_configured": true, 00:12:07.333 "data_offset": 2048, 00:12:07.333 "data_size": 63488 00:12:07.333 }, 00:12:07.333 { 00:12:07.333 "name": "pt3", 00:12:07.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.333 "is_configured": true, 00:12:07.333 "data_offset": 2048, 00:12:07.333 "data_size": 63488 00:12:07.333 } 
00:12:07.333 ] 00:12:07.333 } 00:12:07.333 } 00:12:07.333 }' 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:07.593 pt2 00:12:07.593 pt3' 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.593 16:27:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.593 [2024-12-06 16:27:49.409171] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.593 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=51fc5c21-abea-4c36-9fc5-bb37f984240c 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 51fc5c21-abea-4c36-9fc5-bb37f984240c ']' 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.853 [2024-12-06 16:27:49.452792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.853 [2024-12-06 16:27:49.452889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.853 [2024-12-06 16:27:49.452995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.853 [2024-12-06 16:27:49.453107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.853 [2024-12-06 16:27:49.453126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:07.853 
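After `bdev_raid_delete`, the script confirms the raid bdev is gone by piping `bdev_raid_get_bdevs all` through `jq -r '.[]'` and checking the result is empty (bdev_raid.sh@442-443). A standalone sketch of that check, with the RPC output replaced by a literal empty array and `jq` assumed to be installed:

```shell
# Post-delete check: an empty bdev_raid_get_bdevs result means raid_bdev1
# was torn down. The RPC output is stubbed with an empty JSON array here.
rpc_output='[]'
raid_bdev=$(jq -r '.[]' <<< "$rpc_output")
if [ -n "$raid_bdev" ]; then
    echo "raid bdev still present: $raid_bdev"
else
    echo "raid bdev deleted"
fi
```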
16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.853 [2024-12-06 16:27:49.608532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:07.853 [2024-12-06 16:27:49.610717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:07.853 [2024-12-06 16:27:49.610779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:12:07.853 [2024-12-06 16:27:49.610843] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:07.853 [2024-12-06 16:27:49.610898] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:07.853 [2024-12-06 16:27:49.610920] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:07.853 [2024-12-06 16:27:49.610935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.853 [2024-12-06 16:27:49.610947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:12:07.853 request: 00:12:07.853 { 00:12:07.853 "name": "raid_bdev1", 00:12:07.853 "raid_level": "raid1", 00:12:07.853 "base_bdevs": [ 00:12:07.853 "malloc1", 00:12:07.853 "malloc2", 00:12:07.853 "malloc3" 00:12:07.853 ], 00:12:07.853 "superblock": false, 00:12:07.853 "method": "bdev_raid_create", 00:12:07.853 "req_id": 1 00:12:07.853 } 00:12:07.853 Got JSON-RPC error response 00:12:07.853 response: 00:12:07.853 { 00:12:07.853 "code": -17, 00:12:07.853 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:07.853 } 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:07.853 16:27:49 
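The duplicate `bdev_raid_create` above is expected to fail with `-17` ("File exists") because the malloc bdevs already carry a superblock from the first raid bdev, so the script wraps it in the `NOT` helper, which inverts the exit status (autotest_common.sh@652+). A minimal sketch of that helper, simplified from the trace:

```shell
# Simplified version of the NOT helper seen in the trace: the wrapped
# command is expected to fail. Plain success is a test failure, and an
# exit status above 128 (a signal, e.g. a crash) is also a failure;
# only an ordinary nonzero error status passes.
NOT() {
    local es=0
    "$@" || es=$?
    if ((es == 0 || es > 128)); then
        return 1
    fi
    return 0
}
```

For example, `NOT false` succeeds (the failure was expected), while `NOT true` fails.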
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:07.853 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.854 [2024-12-06 16:27:49.660440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:07.854 [2024-12-06 16:27:49.660575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.854 [2024-12-06 16:27:49.660637] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:07.854 [2024-12-06 16:27:49.660684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.854 [2024-12-06 16:27:49.663256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.854 [2024-12-06 16:27:49.663347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:07.854 [2024-12-06 16:27:49.663483] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:07.854 [2024-12-06 16:27:49.663570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:07.854 pt1 00:12:07.854 16:27:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.854 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.114 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.114 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.114 "name": "raid_bdev1", 00:12:08.114 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:08.114 "strip_size_kb": 0, 00:12:08.114 "state": 
"configuring", 00:12:08.114 "raid_level": "raid1", 00:12:08.114 "superblock": true, 00:12:08.114 "num_base_bdevs": 3, 00:12:08.114 "num_base_bdevs_discovered": 1, 00:12:08.114 "num_base_bdevs_operational": 3, 00:12:08.114 "base_bdevs_list": [ 00:12:08.114 { 00:12:08.114 "name": "pt1", 00:12:08.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.114 "is_configured": true, 00:12:08.114 "data_offset": 2048, 00:12:08.114 "data_size": 63488 00:12:08.114 }, 00:12:08.114 { 00:12:08.114 "name": null, 00:12:08.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.114 "is_configured": false, 00:12:08.114 "data_offset": 2048, 00:12:08.114 "data_size": 63488 00:12:08.114 }, 00:12:08.114 { 00:12:08.114 "name": null, 00:12:08.114 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.114 "is_configured": false, 00:12:08.114 "data_offset": 2048, 00:12:08.114 "data_size": 63488 00:12:08.114 } 00:12:08.114 ] 00:12:08.114 }' 00:12:08.114 16:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.114 16:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.392 [2024-12-06 16:27:50.075746] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:08.392 [2024-12-06 16:27:50.075841] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.392 [2024-12-06 16:27:50.075883] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:08.392 
[2024-12-06 16:27:50.075899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.392 [2024-12-06 16:27:50.076407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.392 [2024-12-06 16:27:50.076435] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:08.392 [2024-12-06 16:27:50.076521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:08.392 [2024-12-06 16:27:50.076551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:08.392 pt2 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.392 [2024-12-06 16:27:50.083737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.392 "name": "raid_bdev1", 00:12:08.392 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:08.392 "strip_size_kb": 0, 00:12:08.392 "state": "configuring", 00:12:08.392 "raid_level": "raid1", 00:12:08.392 "superblock": true, 00:12:08.392 "num_base_bdevs": 3, 00:12:08.392 "num_base_bdevs_discovered": 1, 00:12:08.392 "num_base_bdevs_operational": 3, 00:12:08.392 "base_bdevs_list": [ 00:12:08.392 { 00:12:08.392 "name": "pt1", 00:12:08.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.392 "is_configured": true, 00:12:08.392 "data_offset": 2048, 00:12:08.392 "data_size": 63488 00:12:08.392 }, 00:12:08.392 { 00:12:08.392 "name": null, 00:12:08.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.392 "is_configured": false, 00:12:08.392 "data_offset": 0, 00:12:08.392 "data_size": 63488 00:12:08.392 }, 00:12:08.392 { 00:12:08.392 "name": null, 00:12:08.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.392 "is_configured": false, 00:12:08.392 
"data_offset": 2048, 00:12:08.392 "data_size": 63488 00:12:08.392 } 00:12:08.392 ] 00:12:08.392 }' 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.392 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.962 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:08.962 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.962 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:08.962 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.962 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.962 [2024-12-06 16:27:50.587006] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:08.962 [2024-12-06 16:27:50.587098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.962 [2024-12-06 16:27:50.587123] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:08.962 [2024-12-06 16:27:50.587133] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.962 [2024-12-06 16:27:50.587611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.962 [2024-12-06 16:27:50.587634] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:08.962 [2024-12-06 16:27:50.587722] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:08.962 [2024-12-06 16:27:50.587746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:08.962 pt2 00:12:08.962 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.962 16:27:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:08.962 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.962 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:08.962 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.962 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.962 [2024-12-06 16:27:50.594963] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:08.963 [2024-12-06 16:27:50.595059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.963 [2024-12-06 16:27:50.595092] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:08.963 [2024-12-06 16:27:50.595104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.963 [2024-12-06 16:27:50.595562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.963 [2024-12-06 16:27:50.595596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:08.963 [2024-12-06 16:27:50.595683] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:08.963 [2024-12-06 16:27:50.595717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:08.963 [2024-12-06 16:27:50.595841] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:08.963 [2024-12-06 16:27:50.595850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:08.963 [2024-12-06 16:27:50.596110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:08.963 [2024-12-06 16:27:50.596262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:12:08.963 [2024-12-06 16:27:50.596278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:12:08.963 [2024-12-06 16:27:50.596419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.963 pt3 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.963 "name": "raid_bdev1", 00:12:08.963 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:08.963 "strip_size_kb": 0, 00:12:08.963 "state": "online", 00:12:08.963 "raid_level": "raid1", 00:12:08.963 "superblock": true, 00:12:08.963 "num_base_bdevs": 3, 00:12:08.963 "num_base_bdevs_discovered": 3, 00:12:08.963 "num_base_bdevs_operational": 3, 00:12:08.963 "base_bdevs_list": [ 00:12:08.963 { 00:12:08.963 "name": "pt1", 00:12:08.963 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.963 "is_configured": true, 00:12:08.963 "data_offset": 2048, 00:12:08.963 "data_size": 63488 00:12:08.963 }, 00:12:08.963 { 00:12:08.963 "name": "pt2", 00:12:08.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.963 "is_configured": true, 00:12:08.963 "data_offset": 2048, 00:12:08.963 "data_size": 63488 00:12:08.963 }, 00:12:08.963 { 00:12:08.963 "name": "pt3", 00:12:08.963 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.963 "is_configured": true, 00:12:08.963 "data_offset": 2048, 00:12:08.963 "data_size": 63488 00:12:08.963 } 00:12:08.963 ] 00:12:08.963 }' 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.963 16:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.531 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:09.531 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:09.531 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:12:09.531 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:09.531 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:09.531 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:09.531 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:09.531 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:09.531 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.531 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.531 [2024-12-06 16:27:51.074501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.531 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.531 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:09.531 "name": "raid_bdev1", 00:12:09.531 "aliases": [ 00:12:09.531 "51fc5c21-abea-4c36-9fc5-bb37f984240c" 00:12:09.531 ], 00:12:09.531 "product_name": "Raid Volume", 00:12:09.531 "block_size": 512, 00:12:09.531 "num_blocks": 63488, 00:12:09.531 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:09.531 "assigned_rate_limits": { 00:12:09.531 "rw_ios_per_sec": 0, 00:12:09.531 "rw_mbytes_per_sec": 0, 00:12:09.531 "r_mbytes_per_sec": 0, 00:12:09.531 "w_mbytes_per_sec": 0 00:12:09.531 }, 00:12:09.531 "claimed": false, 00:12:09.531 "zoned": false, 00:12:09.531 "supported_io_types": { 00:12:09.531 "read": true, 00:12:09.531 "write": true, 00:12:09.531 "unmap": false, 00:12:09.531 "flush": false, 00:12:09.531 "reset": true, 00:12:09.531 "nvme_admin": false, 00:12:09.531 "nvme_io": false, 00:12:09.531 "nvme_io_md": false, 00:12:09.531 "write_zeroes": true, 00:12:09.531 "zcopy": false, 00:12:09.531 "get_zone_info": false, 
00:12:09.531 "zone_management": false, 00:12:09.531 "zone_append": false, 00:12:09.531 "compare": false, 00:12:09.531 "compare_and_write": false, 00:12:09.531 "abort": false, 00:12:09.531 "seek_hole": false, 00:12:09.531 "seek_data": false, 00:12:09.531 "copy": false, 00:12:09.532 "nvme_iov_md": false 00:12:09.532 }, 00:12:09.532 "memory_domains": [ 00:12:09.532 { 00:12:09.532 "dma_device_id": "system", 00:12:09.532 "dma_device_type": 1 00:12:09.532 }, 00:12:09.532 { 00:12:09.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.532 "dma_device_type": 2 00:12:09.532 }, 00:12:09.532 { 00:12:09.532 "dma_device_id": "system", 00:12:09.532 "dma_device_type": 1 00:12:09.532 }, 00:12:09.532 { 00:12:09.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.532 "dma_device_type": 2 00:12:09.532 }, 00:12:09.532 { 00:12:09.532 "dma_device_id": "system", 00:12:09.532 "dma_device_type": 1 00:12:09.532 }, 00:12:09.532 { 00:12:09.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.532 "dma_device_type": 2 00:12:09.532 } 00:12:09.532 ], 00:12:09.532 "driver_specific": { 00:12:09.532 "raid": { 00:12:09.532 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:09.532 "strip_size_kb": 0, 00:12:09.532 "state": "online", 00:12:09.532 "raid_level": "raid1", 00:12:09.532 "superblock": true, 00:12:09.532 "num_base_bdevs": 3, 00:12:09.532 "num_base_bdevs_discovered": 3, 00:12:09.532 "num_base_bdevs_operational": 3, 00:12:09.532 "base_bdevs_list": [ 00:12:09.532 { 00:12:09.532 "name": "pt1", 00:12:09.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:09.532 "is_configured": true, 00:12:09.532 "data_offset": 2048, 00:12:09.532 "data_size": 63488 00:12:09.532 }, 00:12:09.532 { 00:12:09.532 "name": "pt2", 00:12:09.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.532 "is_configured": true, 00:12:09.532 "data_offset": 2048, 00:12:09.532 "data_size": 63488 00:12:09.532 }, 00:12:09.532 { 00:12:09.532 "name": "pt3", 00:12:09.532 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:09.532 "is_configured": true, 00:12:09.532 "data_offset": 2048, 00:12:09.532 "data_size": 63488 00:12:09.532 } 00:12:09.532 ] 00:12:09.532 } 00:12:09.532 } 00:12:09.532 }' 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:09.532 pt2 00:12:09.532 pt3' 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:09.532 [2024-12-06 16:27:51.334053] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.532 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.791 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 51fc5c21-abea-4c36-9fc5-bb37f984240c '!=' 51fc5c21-abea-4c36-9fc5-bb37f984240c ']' 00:12:09.791 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:09.791 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:09.791 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:09.791 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.792 [2024-12-06 16:27:51.381683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.792 16:27:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.792 "name": "raid_bdev1", 00:12:09.792 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:09.792 "strip_size_kb": 0, 00:12:09.792 "state": "online", 00:12:09.792 "raid_level": "raid1", 00:12:09.792 "superblock": true, 00:12:09.792 "num_base_bdevs": 3, 00:12:09.792 "num_base_bdevs_discovered": 2, 00:12:09.792 "num_base_bdevs_operational": 2, 00:12:09.792 "base_bdevs_list": [ 00:12:09.792 { 00:12:09.792 "name": null, 00:12:09.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.792 "is_configured": false, 00:12:09.792 "data_offset": 0, 00:12:09.792 "data_size": 63488 00:12:09.792 }, 00:12:09.792 { 00:12:09.792 "name": "pt2", 00:12:09.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.792 "is_configured": true, 00:12:09.792 "data_offset": 2048, 00:12:09.792 "data_size": 63488 00:12:09.792 }, 00:12:09.792 { 00:12:09.792 "name": "pt3", 00:12:09.792 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.792 "is_configured": true, 00:12:09.792 "data_offset": 2048, 00:12:09.792 "data_size": 63488 00:12:09.792 } 
00:12:09.792 ] 00:12:09.792 }' 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.792 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.051 [2024-12-06 16:27:51.796922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:10.051 [2024-12-06 16:27:51.797000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.051 [2024-12-06 16:27:51.797082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.051 [2024-12-06 16:27:51.797150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.051 [2024-12-06 16:27:51.797177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.051 16:27:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.051 [2024-12-06 16:27:51.868801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:10.051 [2024-12-06 16:27:51.868872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.051 [2024-12-06 16:27:51.868892] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:10.051 [2024-12-06 16:27:51.868900] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.051 [2024-12-06 16:27:51.871235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.051 [2024-12-06 16:27:51.871268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:10.051 [2024-12-06 16:27:51.871346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:10.051 [2024-12-06 16:27:51.871381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:10.051 pt2 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.051 16:27:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.051 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.311 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.311 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.311 "name": "raid_bdev1", 00:12:10.311 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:10.311 "strip_size_kb": 0, 00:12:10.311 "state": "configuring", 00:12:10.311 "raid_level": "raid1", 00:12:10.311 "superblock": true, 00:12:10.311 "num_base_bdevs": 3, 00:12:10.311 "num_base_bdevs_discovered": 1, 00:12:10.311 "num_base_bdevs_operational": 2, 00:12:10.311 "base_bdevs_list": [ 00:12:10.311 { 00:12:10.311 "name": null, 00:12:10.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.311 "is_configured": false, 00:12:10.311 "data_offset": 2048, 00:12:10.311 "data_size": 63488 00:12:10.311 }, 00:12:10.311 { 00:12:10.311 "name": "pt2", 00:12:10.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.311 "is_configured": true, 00:12:10.311 "data_offset": 2048, 00:12:10.311 "data_size": 63488 00:12:10.311 }, 00:12:10.311 { 00:12:10.311 "name": null, 00:12:10.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:10.311 "is_configured": false, 00:12:10.311 "data_offset": 2048, 00:12:10.311 "data_size": 63488 00:12:10.311 } 
00:12:10.311 ] 00:12:10.311 }' 00:12:10.311 16:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.311 16:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.571 [2024-12-06 16:27:52.300098] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:10.571 [2024-12-06 16:27:52.300214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.571 [2024-12-06 16:27:52.300263] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:10.571 [2024-12-06 16:27:52.300316] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.571 [2024-12-06 16:27:52.300799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.571 [2024-12-06 16:27:52.300866] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:10.571 [2024-12-06 16:27:52.300999] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:10.571 [2024-12-06 16:27:52.301072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:10.571 [2024-12-06 16:27:52.301243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:12:10.571 [2024-12-06 16:27:52.301293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.571 [2024-12-06 16:27:52.301616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:10.571 [2024-12-06 16:27:52.301793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:10.571 [2024-12-06 16:27:52.301846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:12:10.571 [2024-12-06 16:27:52.302014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.571 pt3 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.571 
16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.571 "name": "raid_bdev1", 00:12:10.571 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:10.571 "strip_size_kb": 0, 00:12:10.571 "state": "online", 00:12:10.571 "raid_level": "raid1", 00:12:10.571 "superblock": true, 00:12:10.571 "num_base_bdevs": 3, 00:12:10.571 "num_base_bdevs_discovered": 2, 00:12:10.571 "num_base_bdevs_operational": 2, 00:12:10.571 "base_bdevs_list": [ 00:12:10.571 { 00:12:10.571 "name": null, 00:12:10.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.571 "is_configured": false, 00:12:10.571 "data_offset": 2048, 00:12:10.571 "data_size": 63488 00:12:10.571 }, 00:12:10.571 { 00:12:10.571 "name": "pt2", 00:12:10.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.571 "is_configured": true, 00:12:10.571 "data_offset": 2048, 00:12:10.571 "data_size": 63488 00:12:10.571 }, 00:12:10.571 { 00:12:10.571 "name": "pt3", 00:12:10.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:10.571 "is_configured": true, 00:12:10.571 "data_offset": 2048, 00:12:10.571 "data_size": 63488 00:12:10.571 } 00:12:10.571 ] 00:12:10.571 }' 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.571 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.140 [2024-12-06 16:27:52.755349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:11.140 [2024-12-06 16:27:52.755379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.140 [2024-12-06 16:27:52.755465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.140 [2024-12-06 16:27:52.755526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.140 [2024-12-06 16:27:52.755538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:12:11.140 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.141 [2024-12-06 16:27:52.803308] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:11.141 [2024-12-06 16:27:52.803379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.141 [2024-12-06 16:27:52.803401] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:11.141 [2024-12-06 16:27:52.803412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.141 [2024-12-06 16:27:52.805768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.141 [2024-12-06 16:27:52.805864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:11.141 [2024-12-06 16:27:52.805956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:11.141 [2024-12-06 16:27:52.806010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:11.141 [2024-12-06 16:27:52.806142] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:11.141 [2024-12-06 16:27:52.806158] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:11.141 [2024-12-06 16:27:52.806173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:12:11.141 [2024-12-06 16:27:52.806232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:11.141 pt1 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.141 "name": "raid_bdev1", 00:12:11.141 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:11.141 "strip_size_kb": 0, 00:12:11.141 "state": "configuring", 00:12:11.141 "raid_level": "raid1", 00:12:11.141 "superblock": true, 00:12:11.141 "num_base_bdevs": 3, 00:12:11.141 "num_base_bdevs_discovered": 1, 00:12:11.141 "num_base_bdevs_operational": 2, 00:12:11.141 "base_bdevs_list": [ 00:12:11.141 { 00:12:11.141 "name": null, 00:12:11.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.141 "is_configured": false, 00:12:11.141 "data_offset": 2048, 00:12:11.141 "data_size": 63488 00:12:11.141 }, 00:12:11.141 { 00:12:11.141 "name": "pt2", 00:12:11.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:11.141 "is_configured": true, 00:12:11.141 "data_offset": 2048, 00:12:11.141 "data_size": 63488 00:12:11.141 }, 00:12:11.141 { 00:12:11.141 "name": null, 00:12:11.141 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:11.141 "is_configured": false, 00:12:11.141 "data_offset": 2048, 00:12:11.141 "data_size": 63488 00:12:11.141 } 00:12:11.141 ] 00:12:11.141 }' 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.141 16:27:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.709 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:11.709 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:11.709 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.709 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.709 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:11.709 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:11.709 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:11.709 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.709 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.709 [2024-12-06 16:27:53.294419] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:11.709 [2024-12-06 16:27:53.294546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.709 [2024-12-06 16:27:53.294607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:11.709 [2024-12-06 16:27:53.294627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.709 [2024-12-06 16:27:53.295084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.709 [2024-12-06 16:27:53.295112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:11.709 [2024-12-06 16:27:53.295194] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:11.709 [2024-12-06 16:27:53.295248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:11.710 [2024-12-06 16:27:53.295350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:11.710 [2024-12-06 16:27:53.295364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:11.710 [2024-12-06 16:27:53.295630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:11.710 [2024-12-06 16:27:53.295783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:11.710 [2024-12-06 16:27:53.295795] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:11.710 [2024-12-06 16:27:53.295912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.710 pt3 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.710 "name": "raid_bdev1", 00:12:11.710 "uuid": "51fc5c21-abea-4c36-9fc5-bb37f984240c", 00:12:11.710 "strip_size_kb": 0, 00:12:11.710 "state": "online", 00:12:11.710 "raid_level": "raid1", 00:12:11.710 "superblock": true, 00:12:11.710 "num_base_bdevs": 3, 00:12:11.710 "num_base_bdevs_discovered": 2, 00:12:11.710 "num_base_bdevs_operational": 2, 00:12:11.710 "base_bdevs_list": [ 00:12:11.710 { 00:12:11.710 "name": null, 00:12:11.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.710 "is_configured": false, 00:12:11.710 "data_offset": 2048, 00:12:11.710 "data_size": 63488 00:12:11.710 }, 00:12:11.710 { 00:12:11.710 "name": "pt2", 00:12:11.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:11.710 "is_configured": true, 00:12:11.710 "data_offset": 2048, 00:12:11.710 "data_size": 63488 00:12:11.710 }, 00:12:11.710 { 00:12:11.710 "name": "pt3", 00:12:11.710 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:11.710 "is_configured": true, 00:12:11.710 "data_offset": 2048, 00:12:11.710 "data_size": 63488 00:12:11.710 } 00:12:11.710 ] 00:12:11.710 }' 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.710 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.969 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:11.969 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.969 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.969 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:11.969 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.969 16:27:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:11.969 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:11.969 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.969 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.969 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:11.969 [2024-12-06 16:27:53.801829] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.257 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.257 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 51fc5c21-abea-4c36-9fc5-bb37f984240c '!=' 51fc5c21-abea-4c36-9fc5-bb37f984240c ']' 00:12:12.257 16:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80057 00:12:12.257 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80057 ']' 00:12:12.257 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80057 00:12:12.257 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:12.257 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.257 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80057 00:12:12.257 killing process with pid 80057 00:12:12.258 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.258 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.258 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80057' 00:12:12.258 16:27:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 80057 00:12:12.258 16:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 80057 00:12:12.258 [2024-12-06 16:27:53.882116] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.258 [2024-12-06 16:27:53.882235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.258 [2024-12-06 16:27:53.882322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.258 [2024-12-06 16:27:53.882332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:12.258 [2024-12-06 16:27:53.916033] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.516 ************************************ 00:12:12.516 END TEST raid_superblock_test 00:12:12.516 ************************************ 00:12:12.516 16:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:12.516 00:12:12.516 real 0m6.547s 00:12:12.516 user 0m11.101s 00:12:12.516 sys 0m1.276s 00:12:12.516 16:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.516 16:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.516 16:27:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:12.516 16:27:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:12.516 16:27:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.516 16:27:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.516 ************************************ 00:12:12.516 START TEST raid_read_error_test 00:12:12.516 ************************************ 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:12:12.516 16:27:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:12.516 16:27:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qSzrcghG63 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80486 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80486 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 80486 ']' 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.516 16:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.516 [2024-12-06 16:27:54.317031] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:12:12.516 [2024-12-06 16:27:54.317243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80486 ] 00:12:12.775 [2024-12-06 16:27:54.468797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.775 [2024-12-06 16:27:54.498023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.775 [2024-12-06 16:27:54.540793] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.775 [2024-12-06 16:27:54.540916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.342 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.342 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:13.342 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.342 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:13.342 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.601 BaseBdev1_malloc 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.601 true 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.601 [2024-12-06 16:27:55.212923] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:13.601 [2024-12-06 16:27:55.212984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.601 [2024-12-06 16:27:55.213024] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:13.601 [2024-12-06 16:27:55.213034] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.601 [2024-12-06 16:27:55.215195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.601 [2024-12-06 16:27:55.215243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.601 BaseBdev1 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.601 BaseBdev2_malloc 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.601 true 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.601 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.601 [2024-12-06 16:27:55.253653] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:13.601 [2024-12-06 16:27:55.253711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.601 [2024-12-06 16:27:55.253732] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:13.601 [2024-12-06 16:27:55.253741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.601 [2024-12-06 16:27:55.256120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.602 [2024-12-06 16:27:55.256230] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:13.602 BaseBdev2 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.602 BaseBdev3_malloc 00:12:13.602 16:27:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.602 true 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.602 [2024-12-06 16:27:55.294164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:13.602 [2024-12-06 16:27:55.294228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.602 [2024-12-06 16:27:55.294261] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:13.602 [2024-12-06 16:27:55.294271] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.602 [2024-12-06 16:27:55.296749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.602 [2024-12-06 16:27:55.296791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:13.602 BaseBdev3 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.602 [2024-12-06 16:27:55.306219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.602 [2024-12-06 16:27:55.308333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.602 [2024-12-06 16:27:55.308416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.602 [2024-12-06 16:27:55.308612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:13.602 [2024-12-06 16:27:55.308628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.602 [2024-12-06 16:27:55.308890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:13.602 [2024-12-06 16:27:55.309043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:13.602 [2024-12-06 16:27:55.309053] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:12:13.602 [2024-12-06 16:27:55.309181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.602 16:27:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.602 "name": "raid_bdev1", 00:12:13.602 "uuid": "e101072e-3e75-44f1-aa77-2118065d0408", 00:12:13.602 "strip_size_kb": 0, 00:12:13.602 "state": "online", 00:12:13.602 "raid_level": "raid1", 00:12:13.602 "superblock": true, 00:12:13.602 "num_base_bdevs": 3, 00:12:13.602 "num_base_bdevs_discovered": 3, 00:12:13.602 "num_base_bdevs_operational": 3, 00:12:13.602 "base_bdevs_list": [ 00:12:13.602 { 00:12:13.602 "name": "BaseBdev1", 00:12:13.602 "uuid": "2c93fa59-6f4f-5bd6-b36d-9896cf8dceb9", 00:12:13.602 "is_configured": true, 00:12:13.602 "data_offset": 2048, 00:12:13.602 "data_size": 63488 00:12:13.602 }, 00:12:13.602 { 00:12:13.602 "name": "BaseBdev2", 00:12:13.602 "uuid": "9b6faf6e-b5f6-5469-a5ee-b8074256a640", 00:12:13.602 "is_configured": true, 00:12:13.602 "data_offset": 2048, 00:12:13.602 "data_size": 63488 
00:12:13.602 }, 00:12:13.602 { 00:12:13.602 "name": "BaseBdev3", 00:12:13.602 "uuid": "0d2dc298-1add-50a1-867d-e5cca9b0e414", 00:12:13.602 "is_configured": true, 00:12:13.602 "data_offset": 2048, 00:12:13.602 "data_size": 63488 00:12:13.602 } 00:12:13.602 ] 00:12:13.602 }' 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.602 16:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.169 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:14.169 16:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:14.169 [2024-12-06 16:27:55.881646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.105 
16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.105 "name": "raid_bdev1", 00:12:15.105 "uuid": "e101072e-3e75-44f1-aa77-2118065d0408", 00:12:15.105 "strip_size_kb": 0, 00:12:15.105 "state": "online", 00:12:15.105 "raid_level": "raid1", 00:12:15.105 "superblock": true, 00:12:15.105 "num_base_bdevs": 3, 00:12:15.105 "num_base_bdevs_discovered": 3, 00:12:15.105 "num_base_bdevs_operational": 3, 00:12:15.105 "base_bdevs_list": [ 00:12:15.105 { 00:12:15.105 "name": "BaseBdev1", 00:12:15.105 "uuid": "2c93fa59-6f4f-5bd6-b36d-9896cf8dceb9", 
00:12:15.105 "is_configured": true, 00:12:15.105 "data_offset": 2048, 00:12:15.105 "data_size": 63488 00:12:15.105 }, 00:12:15.105 { 00:12:15.105 "name": "BaseBdev2", 00:12:15.105 "uuid": "9b6faf6e-b5f6-5469-a5ee-b8074256a640", 00:12:15.105 "is_configured": true, 00:12:15.105 "data_offset": 2048, 00:12:15.105 "data_size": 63488 00:12:15.105 }, 00:12:15.105 { 00:12:15.105 "name": "BaseBdev3", 00:12:15.105 "uuid": "0d2dc298-1add-50a1-867d-e5cca9b0e414", 00:12:15.105 "is_configured": true, 00:12:15.105 "data_offset": 2048, 00:12:15.105 "data_size": 63488 00:12:15.105 } 00:12:15.105 ] 00:12:15.105 }' 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.105 16:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.674 [2024-12-06 16:27:57.300044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.674 [2024-12-06 16:27:57.300083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.674 [2024-12-06 16:27:57.303231] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.674 [2024-12-06 16:27:57.303322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.674 [2024-12-06 16:27:57.303446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.674 [2024-12-06 16:27:57.303461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:12:15.674 { 00:12:15.674 "results": [ 00:12:15.674 { 00:12:15.674 "job": "raid_bdev1", 
00:12:15.674 "core_mask": "0x1", 00:12:15.674 "workload": "randrw", 00:12:15.674 "percentage": 50, 00:12:15.674 "status": "finished", 00:12:15.674 "queue_depth": 1, 00:12:15.674 "io_size": 131072, 00:12:15.674 "runtime": 1.419219, 00:12:15.674 "iops": 12995.880128436838, 00:12:15.674 "mibps": 1624.4850160546048, 00:12:15.674 "io_failed": 0, 00:12:15.674 "io_timeout": 0, 00:12:15.674 "avg_latency_us": 74.01860180563094, 00:12:15.674 "min_latency_us": 24.705676855895195, 00:12:15.674 "max_latency_us": 1495.3082969432314 00:12:15.674 } 00:12:15.674 ], 00:12:15.674 "core_count": 1 00:12:15.674 } 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80486 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 80486 ']' 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 80486 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80486 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.674 killing process with pid 80486 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80486' 00:12:15.674 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 80486 00:12:15.674 [2024-12-06 16:27:57.351360] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.674 16:27:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 80486 00:12:15.674 [2024-12-06 16:27:57.378148] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:15.934 16:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:15.934 16:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qSzrcghG63 00:12:15.934 16:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:15.934 16:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:15.934 16:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:15.934 16:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:15.934 16:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:15.934 16:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:15.934 00:12:15.934 real 0m3.398s 00:12:15.934 user 0m4.400s 00:12:15.934 sys 0m0.533s 00:12:15.934 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.934 16:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.934 ************************************ 00:12:15.934 END TEST raid_read_error_test 00:12:15.934 ************************************ 00:12:15.934 16:27:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:15.934 16:27:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:15.934 16:27:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.934 16:27:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:15.934 ************************************ 00:12:15.934 START TEST raid_write_error_test 00:12:15.934 ************************************ 00:12:15.934 16:27:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:12:15.934 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:15.934 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:15.934 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:15.934 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:15.934 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.934 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:15.934 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aaasNN11mW 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80621 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80621 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 80621 ']' 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:15.935 16:27:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.193 [2024-12-06 16:27:57.782098] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:12:16.193 [2024-12-06 16:27:57.782266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80621 ] 00:12:16.193 [2024-12-06 16:27:57.957344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.193 [2024-12-06 16:27:57.986870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.452 [2024-12-06 16:27:58.031230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.452 [2024-12-06 16:27:58.031270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.022 BaseBdev1_malloc 00:12:17.022 16:27:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.022 true 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.022 [2024-12-06 16:27:58.679532] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:17.022 [2024-12-06 16:27:58.679612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.022 [2024-12-06 16:27:58.679660] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:17.022 [2024-12-06 16:27:58.679675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.022 [2024-12-06 16:27:58.682183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.022 [2024-12-06 16:27:58.682282] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:17.022 BaseBdev1 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.022 BaseBdev2_malloc 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.022 true 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.022 [2024-12-06 16:27:58.720553] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:17.022 [2024-12-06 16:27:58.720613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.022 [2024-12-06 16:27:58.720633] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:17.022 [2024-12-06 16:27:58.720643] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.022 [2024-12-06 16:27:58.722953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.022 BaseBdev2 00:12:17.022 [2024-12-06 16:27:58.723080] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.022 BaseBdev3_malloc 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.022 true 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.022 [2024-12-06 16:27:58.761509] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:17.022 [2024-12-06 16:27:58.761635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.022 [2024-12-06 16:27:58.761684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:17.022 [2024-12-06 16:27:58.761727] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.022 [2024-12-06 16:27:58.764132] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.022 [2024-12-06 16:27:58.764219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:17.022 BaseBdev3 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.022 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.023 [2024-12-06 16:27:58.773541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.023 [2024-12-06 16:27:58.775643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.023 [2024-12-06 16:27:58.775770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.023 [2024-12-06 16:27:58.776001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:17.023 [2024-12-06 16:27:58.776056] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:17.023 [2024-12-06 16:27:58.776374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:17.023 [2024-12-06 16:27:58.776595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:17.023 [2024-12-06 16:27:58.776643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:12:17.023 [2024-12-06 16:27:58.776851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.023 16:27:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.023 "name": "raid_bdev1", 00:12:17.023 "uuid": "bcf7d426-9072-4847-88cb-7a773f847602", 00:12:17.023 "strip_size_kb": 0, 00:12:17.023 "state": "online", 00:12:17.023 "raid_level": "raid1", 00:12:17.023 "superblock": true, 00:12:17.023 
"num_base_bdevs": 3, 00:12:17.023 "num_base_bdevs_discovered": 3, 00:12:17.023 "num_base_bdevs_operational": 3, 00:12:17.023 "base_bdevs_list": [ 00:12:17.023 { 00:12:17.023 "name": "BaseBdev1", 00:12:17.023 "uuid": "22d52233-beaa-5215-a48a-92bd39083971", 00:12:17.023 "is_configured": true, 00:12:17.023 "data_offset": 2048, 00:12:17.023 "data_size": 63488 00:12:17.023 }, 00:12:17.023 { 00:12:17.023 "name": "BaseBdev2", 00:12:17.023 "uuid": "b1b9b8c5-35b1-5ff9-9452-13c19f568e98", 00:12:17.023 "is_configured": true, 00:12:17.023 "data_offset": 2048, 00:12:17.023 "data_size": 63488 00:12:17.023 }, 00:12:17.023 { 00:12:17.023 "name": "BaseBdev3", 00:12:17.023 "uuid": "5e6c7e63-cfcf-5228-9f33-bd6f96fc679a", 00:12:17.023 "is_configured": true, 00:12:17.023 "data_offset": 2048, 00:12:17.023 "data_size": 63488 00:12:17.023 } 00:12:17.023 ] 00:12:17.023 }' 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.023 16:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.592 16:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:17.592 16:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:17.592 [2024-12-06 16:27:59.321004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.532 [2024-12-06 16:28:00.229238] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:18.532 [2024-12-06 16:28:00.229297] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:18.532 [2024-12-06 16:28:00.229515] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006560 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.532 16:28:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.532 "name": "raid_bdev1", 00:12:18.532 "uuid": "bcf7d426-9072-4847-88cb-7a773f847602", 00:12:18.532 "strip_size_kb": 0, 00:12:18.532 "state": "online", 00:12:18.532 "raid_level": "raid1", 00:12:18.532 "superblock": true, 00:12:18.532 "num_base_bdevs": 3, 00:12:18.532 "num_base_bdevs_discovered": 2, 00:12:18.532 "num_base_bdevs_operational": 2, 00:12:18.532 "base_bdevs_list": [ 00:12:18.532 { 00:12:18.532 "name": null, 00:12:18.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.532 "is_configured": false, 00:12:18.532 "data_offset": 0, 00:12:18.532 "data_size": 63488 00:12:18.532 }, 00:12:18.532 { 00:12:18.532 "name": "BaseBdev2", 00:12:18.532 "uuid": "b1b9b8c5-35b1-5ff9-9452-13c19f568e98", 00:12:18.532 "is_configured": true, 00:12:18.532 "data_offset": 2048, 00:12:18.532 "data_size": 63488 00:12:18.532 }, 00:12:18.532 { 00:12:18.532 "name": "BaseBdev3", 00:12:18.532 "uuid": "5e6c7e63-cfcf-5228-9f33-bd6f96fc679a", 00:12:18.532 "is_configured": true, 00:12:18.532 "data_offset": 2048, 00:12:18.532 "data_size": 63488 00:12:18.532 } 00:12:18.532 ] 00:12:18.532 }' 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.532 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.103 [2024-12-06 16:28:00.655287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:19.103 [2024-12-06 16:28:00.655386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.103 [2024-12-06 16:28:00.658194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.103 [2024-12-06 16:28:00.658297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.103 [2024-12-06 16:28:00.658417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.103 [2024-12-06 16:28:00.658465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:12:19.103 { 00:12:19.103 "results": [ 00:12:19.103 { 00:12:19.103 "job": "raid_bdev1", 00:12:19.103 "core_mask": "0x1", 00:12:19.103 "workload": "randrw", 00:12:19.103 "percentage": 50, 00:12:19.103 "status": "finished", 00:12:19.103 "queue_depth": 1, 00:12:19.103 "io_size": 131072, 00:12:19.103 "runtime": 1.335013, 00:12:19.103 "iops": 14357.912619577488, 00:12:19.103 "mibps": 1794.739077447186, 00:12:19.103 "io_failed": 0, 00:12:19.103 "io_timeout": 0, 00:12:19.103 "avg_latency_us": 66.73459842094903, 00:12:19.103 "min_latency_us": 24.705676855895195, 00:12:19.103 "max_latency_us": 1459.5353711790392 00:12:19.103 } 00:12:19.103 ], 00:12:19.103 "core_count": 1 00:12:19.103 } 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80621 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 80621 ']' 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # kill -0 80621 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80621 00:12:19.103 killing process with pid 80621 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80621' 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 80621 00:12:19.103 [2024-12-06 16:28:00.705763] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 80621 00:12:19.103 [2024-12-06 16:28:00.730626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aaasNN11mW 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:19.103 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:19.364 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:19.364 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:19.364 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:19.364 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:19.364 16:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 
-- # [[ 0.00 = \0\.\0\0 ]] 00:12:19.364 00:12:19.364 real 0m3.267s 00:12:19.364 user 0m4.157s 00:12:19.364 sys 0m0.544s 00:12:19.364 ************************************ 00:12:19.364 END TEST raid_write_error_test 00:12:19.364 ************************************ 00:12:19.364 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.364 16:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.364 16:28:00 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:19.364 16:28:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:19.364 16:28:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:19.364 16:28:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:19.364 16:28:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.364 16:28:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.364 ************************************ 00:12:19.364 START TEST raid_state_function_test 00:12:19.364 ************************************ 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:19.364 16:28:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80748 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80748' 00:12:19.364 Process raid pid: 80748 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80748 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80748 ']' 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.364 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.364 [2024-12-06 16:28:01.110193] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:12:19.364 [2024-12-06 16:28:01.110321] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.625 [2024-12-06 16:28:01.282690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.625 [2024-12-06 16:28:01.308949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.625 [2024-12-06 16:28:01.352003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.625 [2024-12-06 16:28:01.352140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.194 [2024-12-06 16:28:01.954666] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.194 [2024-12-06 16:28:01.954730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.194 [2024-12-06 16:28:01.954758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.194 [2024-12-06 16:28:01.954771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.194 [2024-12-06 16:28:01.954779] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:20.194 [2024-12-06 16:28:01.954792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:20.194 [2024-12-06 16:28:01.954800] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:20.194 [2024-12-06 16:28:01.954810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.194 16:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.194 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.194 "name": "Existed_Raid", 00:12:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.194 "strip_size_kb": 64, 00:12:20.194 "state": "configuring", 00:12:20.194 "raid_level": "raid0", 00:12:20.194 "superblock": false, 00:12:20.194 "num_base_bdevs": 4, 00:12:20.194 "num_base_bdevs_discovered": 0, 00:12:20.194 "num_base_bdevs_operational": 4, 00:12:20.194 "base_bdevs_list": [ 00:12:20.194 { 00:12:20.194 "name": "BaseBdev1", 00:12:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.194 "is_configured": false, 00:12:20.194 "data_offset": 0, 00:12:20.194 "data_size": 0 00:12:20.194 }, 00:12:20.194 { 00:12:20.194 "name": "BaseBdev2", 00:12:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.194 "is_configured": false, 00:12:20.194 "data_offset": 0, 00:12:20.194 "data_size": 0 00:12:20.194 }, 00:12:20.194 { 00:12:20.194 "name": "BaseBdev3", 00:12:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.194 "is_configured": false, 00:12:20.194 "data_offset": 0, 00:12:20.194 "data_size": 0 00:12:20.194 }, 00:12:20.194 { 00:12:20.194 "name": "BaseBdev4", 00:12:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.194 "is_configured": false, 00:12:20.194 "data_offset": 0, 00:12:20.194 "data_size": 0 00:12:20.194 } 00:12:20.194 ] 00:12:20.194 }' 00:12:20.194 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.194 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.762 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:20.762 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.763 [2024-12-06 16:28:02.449714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.763 [2024-12-06 16:28:02.449811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.763 [2024-12-06 16:28:02.457706] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.763 [2024-12-06 16:28:02.457785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.763 [2024-12-06 16:28:02.457814] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.763 [2024-12-06 16:28:02.457836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.763 [2024-12-06 16:28:02.457855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:20.763 [2024-12-06 16:28:02.457877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:20.763 [2024-12-06 16:28:02.457895] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:20.763 [2024-12-06 16:28:02.457916] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.763 [2024-12-06 16:28:02.474714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.763 BaseBdev1 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.763 [ 00:12:20.763 { 00:12:20.763 "name": "BaseBdev1", 00:12:20.763 "aliases": [ 00:12:20.763 "68c68a10-873e-4e1a-a832-ec130d8d629e" 00:12:20.763 ], 00:12:20.763 "product_name": "Malloc disk", 00:12:20.763 "block_size": 512, 00:12:20.763 "num_blocks": 65536, 00:12:20.763 "uuid": "68c68a10-873e-4e1a-a832-ec130d8d629e", 00:12:20.763 "assigned_rate_limits": { 00:12:20.763 "rw_ios_per_sec": 0, 00:12:20.763 "rw_mbytes_per_sec": 0, 00:12:20.763 "r_mbytes_per_sec": 0, 00:12:20.763 "w_mbytes_per_sec": 0 00:12:20.763 }, 00:12:20.763 "claimed": true, 00:12:20.763 "claim_type": "exclusive_write", 00:12:20.763 "zoned": false, 00:12:20.763 "supported_io_types": { 00:12:20.763 "read": true, 00:12:20.763 "write": true, 00:12:20.763 "unmap": true, 00:12:20.763 "flush": true, 00:12:20.763 "reset": true, 00:12:20.763 "nvme_admin": false, 00:12:20.763 "nvme_io": false, 00:12:20.763 "nvme_io_md": false, 00:12:20.763 "write_zeroes": true, 00:12:20.763 "zcopy": true, 00:12:20.763 "get_zone_info": false, 00:12:20.763 "zone_management": false, 00:12:20.763 "zone_append": false, 00:12:20.763 "compare": false, 00:12:20.763 "compare_and_write": false, 00:12:20.763 "abort": true, 00:12:20.763 "seek_hole": false, 00:12:20.763 "seek_data": false, 00:12:20.763 "copy": true, 00:12:20.763 "nvme_iov_md": false 00:12:20.763 }, 00:12:20.763 "memory_domains": [ 00:12:20.763 { 00:12:20.763 "dma_device_id": "system", 00:12:20.763 "dma_device_type": 1 00:12:20.763 }, 00:12:20.763 { 00:12:20.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.763 "dma_device_type": 2 00:12:20.763 } 00:12:20.763 ], 00:12:20.763 "driver_specific": {} 00:12:20.763 } 00:12:20.763 ] 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.763 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.763 "name": "Existed_Raid", 
00:12:20.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.763 "strip_size_kb": 64, 00:12:20.763 "state": "configuring", 00:12:20.763 "raid_level": "raid0", 00:12:20.763 "superblock": false, 00:12:20.763 "num_base_bdevs": 4, 00:12:20.763 "num_base_bdevs_discovered": 1, 00:12:20.763 "num_base_bdevs_operational": 4, 00:12:20.763 "base_bdevs_list": [ 00:12:20.763 { 00:12:20.763 "name": "BaseBdev1", 00:12:20.763 "uuid": "68c68a10-873e-4e1a-a832-ec130d8d629e", 00:12:20.763 "is_configured": true, 00:12:20.763 "data_offset": 0, 00:12:20.763 "data_size": 65536 00:12:20.763 }, 00:12:20.763 { 00:12:20.763 "name": "BaseBdev2", 00:12:20.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.763 "is_configured": false, 00:12:20.763 "data_offset": 0, 00:12:20.763 "data_size": 0 00:12:20.763 }, 00:12:20.763 { 00:12:20.763 "name": "BaseBdev3", 00:12:20.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.763 "is_configured": false, 00:12:20.763 "data_offset": 0, 00:12:20.763 "data_size": 0 00:12:20.764 }, 00:12:20.764 { 00:12:20.764 "name": "BaseBdev4", 00:12:20.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.764 "is_configured": false, 00:12:20.764 "data_offset": 0, 00:12:20.764 "data_size": 0 00:12:20.764 } 00:12:20.764 ] 00:12:20.764 }' 00:12:20.764 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.764 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.331 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:21.331 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.331 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.331 [2024-12-06 16:28:02.985903] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:21.331 [2024-12-06 16:28:02.985957] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:12:21.331 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.331 16:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:21.331 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.331 16:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.331 [2024-12-06 16:28:02.997915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.331 [2024-12-06 16:28:03.000016] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.331 [2024-12-06 16:28:03.000065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.331 [2024-12-06 16:28:03.000076] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:21.331 [2024-12-06 16:28:03.000087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:21.331 [2024-12-06 16:28:03.000094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:21.331 [2024-12-06 16:28:03.000103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.331 "name": "Existed_Raid", 00:12:21.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.331 "strip_size_kb": 64, 00:12:21.331 "state": "configuring", 00:12:21.331 "raid_level": "raid0", 00:12:21.331 "superblock": false, 00:12:21.331 "num_base_bdevs": 4, 00:12:21.331 
"num_base_bdevs_discovered": 1, 00:12:21.331 "num_base_bdevs_operational": 4, 00:12:21.331 "base_bdevs_list": [ 00:12:21.331 { 00:12:21.331 "name": "BaseBdev1", 00:12:21.331 "uuid": "68c68a10-873e-4e1a-a832-ec130d8d629e", 00:12:21.331 "is_configured": true, 00:12:21.331 "data_offset": 0, 00:12:21.331 "data_size": 65536 00:12:21.331 }, 00:12:21.331 { 00:12:21.331 "name": "BaseBdev2", 00:12:21.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.331 "is_configured": false, 00:12:21.331 "data_offset": 0, 00:12:21.331 "data_size": 0 00:12:21.331 }, 00:12:21.331 { 00:12:21.331 "name": "BaseBdev3", 00:12:21.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.331 "is_configured": false, 00:12:21.331 "data_offset": 0, 00:12:21.331 "data_size": 0 00:12:21.331 }, 00:12:21.331 { 00:12:21.331 "name": "BaseBdev4", 00:12:21.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.331 "is_configured": false, 00:12:21.331 "data_offset": 0, 00:12:21.331 "data_size": 0 00:12:21.331 } 00:12:21.331 ] 00:12:21.331 }' 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.331 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.900 [2024-12-06 16:28:03.468160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.900 BaseBdev2 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:21.900 16:28:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.900 [ 00:12:21.900 { 00:12:21.900 "name": "BaseBdev2", 00:12:21.900 "aliases": [ 00:12:21.900 "c2e02982-8a38-4da0-8c16-5f09c687c211" 00:12:21.900 ], 00:12:21.900 "product_name": "Malloc disk", 00:12:21.900 "block_size": 512, 00:12:21.900 "num_blocks": 65536, 00:12:21.900 "uuid": "c2e02982-8a38-4da0-8c16-5f09c687c211", 00:12:21.900 "assigned_rate_limits": { 00:12:21.900 "rw_ios_per_sec": 0, 00:12:21.900 "rw_mbytes_per_sec": 0, 00:12:21.900 "r_mbytes_per_sec": 0, 00:12:21.900 "w_mbytes_per_sec": 0 00:12:21.900 }, 00:12:21.900 "claimed": true, 00:12:21.900 "claim_type": "exclusive_write", 00:12:21.900 "zoned": false, 00:12:21.900 "supported_io_types": { 
00:12:21.900 "read": true, 00:12:21.900 "write": true, 00:12:21.900 "unmap": true, 00:12:21.900 "flush": true, 00:12:21.900 "reset": true, 00:12:21.900 "nvme_admin": false, 00:12:21.900 "nvme_io": false, 00:12:21.900 "nvme_io_md": false, 00:12:21.900 "write_zeroes": true, 00:12:21.900 "zcopy": true, 00:12:21.900 "get_zone_info": false, 00:12:21.900 "zone_management": false, 00:12:21.900 "zone_append": false, 00:12:21.900 "compare": false, 00:12:21.900 "compare_and_write": false, 00:12:21.900 "abort": true, 00:12:21.900 "seek_hole": false, 00:12:21.900 "seek_data": false, 00:12:21.900 "copy": true, 00:12:21.900 "nvme_iov_md": false 00:12:21.900 }, 00:12:21.900 "memory_domains": [ 00:12:21.900 { 00:12:21.900 "dma_device_id": "system", 00:12:21.900 "dma_device_type": 1 00:12:21.900 }, 00:12:21.900 { 00:12:21.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.900 "dma_device_type": 2 00:12:21.900 } 00:12:21.900 ], 00:12:21.900 "driver_specific": {} 00:12:21.900 } 00:12:21.900 ] 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.900 "name": "Existed_Raid", 00:12:21.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.900 "strip_size_kb": 64, 00:12:21.900 "state": "configuring", 00:12:21.900 "raid_level": "raid0", 00:12:21.900 "superblock": false, 00:12:21.900 "num_base_bdevs": 4, 00:12:21.900 "num_base_bdevs_discovered": 2, 00:12:21.900 "num_base_bdevs_operational": 4, 00:12:21.900 "base_bdevs_list": [ 00:12:21.900 { 00:12:21.900 "name": "BaseBdev1", 00:12:21.900 "uuid": "68c68a10-873e-4e1a-a832-ec130d8d629e", 00:12:21.900 "is_configured": true, 00:12:21.900 "data_offset": 0, 00:12:21.900 "data_size": 65536 00:12:21.900 }, 00:12:21.900 { 00:12:21.900 "name": "BaseBdev2", 00:12:21.900 "uuid": "c2e02982-8a38-4da0-8c16-5f09c687c211", 00:12:21.900 
"is_configured": true, 00:12:21.900 "data_offset": 0, 00:12:21.900 "data_size": 65536 00:12:21.900 }, 00:12:21.900 { 00:12:21.900 "name": "BaseBdev3", 00:12:21.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.900 "is_configured": false, 00:12:21.900 "data_offset": 0, 00:12:21.900 "data_size": 0 00:12:21.900 }, 00:12:21.900 { 00:12:21.900 "name": "BaseBdev4", 00:12:21.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.900 "is_configured": false, 00:12:21.900 "data_offset": 0, 00:12:21.900 "data_size": 0 00:12:21.900 } 00:12:21.900 ] 00:12:21.900 }' 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.900 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.160 [2024-12-06 16:28:03.989862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:22.160 BaseBdev3 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.160 16:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.430 [ 00:12:22.430 { 00:12:22.430 "name": "BaseBdev3", 00:12:22.430 "aliases": [ 00:12:22.430 "3279881d-6128-4a01-a251-d6aa32811f24" 00:12:22.430 ], 00:12:22.430 "product_name": "Malloc disk", 00:12:22.430 "block_size": 512, 00:12:22.430 "num_blocks": 65536, 00:12:22.430 "uuid": "3279881d-6128-4a01-a251-d6aa32811f24", 00:12:22.430 "assigned_rate_limits": { 00:12:22.430 "rw_ios_per_sec": 0, 00:12:22.430 "rw_mbytes_per_sec": 0, 00:12:22.430 "r_mbytes_per_sec": 0, 00:12:22.430 "w_mbytes_per_sec": 0 00:12:22.430 }, 00:12:22.430 "claimed": true, 00:12:22.430 "claim_type": "exclusive_write", 00:12:22.430 "zoned": false, 00:12:22.430 "supported_io_types": { 00:12:22.430 "read": true, 00:12:22.430 "write": true, 00:12:22.430 "unmap": true, 00:12:22.430 "flush": true, 00:12:22.430 "reset": true, 00:12:22.430 "nvme_admin": false, 00:12:22.430 "nvme_io": false, 00:12:22.430 "nvme_io_md": false, 00:12:22.430 "write_zeroes": true, 00:12:22.430 "zcopy": true, 00:12:22.430 "get_zone_info": false, 00:12:22.430 "zone_management": false, 00:12:22.430 "zone_append": false, 00:12:22.430 "compare": false, 00:12:22.430 "compare_and_write": false, 
00:12:22.430 "abort": true, 00:12:22.430 "seek_hole": false, 00:12:22.430 "seek_data": false, 00:12:22.430 "copy": true, 00:12:22.430 "nvme_iov_md": false 00:12:22.430 }, 00:12:22.430 "memory_domains": [ 00:12:22.430 { 00:12:22.430 "dma_device_id": "system", 00:12:22.430 "dma_device_type": 1 00:12:22.430 }, 00:12:22.430 { 00:12:22.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.430 "dma_device_type": 2 00:12:22.430 } 00:12:22.430 ], 00:12:22.430 "driver_specific": {} 00:12:22.430 } 00:12:22.430 ] 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.430 "name": "Existed_Raid", 00:12:22.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.430 "strip_size_kb": 64, 00:12:22.430 "state": "configuring", 00:12:22.430 "raid_level": "raid0", 00:12:22.430 "superblock": false, 00:12:22.430 "num_base_bdevs": 4, 00:12:22.430 "num_base_bdevs_discovered": 3, 00:12:22.430 "num_base_bdevs_operational": 4, 00:12:22.430 "base_bdevs_list": [ 00:12:22.430 { 00:12:22.430 "name": "BaseBdev1", 00:12:22.430 "uuid": "68c68a10-873e-4e1a-a832-ec130d8d629e", 00:12:22.430 "is_configured": true, 00:12:22.430 "data_offset": 0, 00:12:22.430 "data_size": 65536 00:12:22.430 }, 00:12:22.430 { 00:12:22.430 "name": "BaseBdev2", 00:12:22.430 "uuid": "c2e02982-8a38-4da0-8c16-5f09c687c211", 00:12:22.430 "is_configured": true, 00:12:22.430 "data_offset": 0, 00:12:22.430 "data_size": 65536 00:12:22.430 }, 00:12:22.430 { 00:12:22.430 "name": "BaseBdev3", 00:12:22.430 "uuid": "3279881d-6128-4a01-a251-d6aa32811f24", 00:12:22.430 "is_configured": true, 00:12:22.430 "data_offset": 0, 00:12:22.430 "data_size": 65536 00:12:22.430 }, 00:12:22.430 { 00:12:22.430 "name": "BaseBdev4", 00:12:22.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.430 "is_configured": false, 
00:12:22.430 "data_offset": 0, 00:12:22.430 "data_size": 0 00:12:22.430 } 00:12:22.430 ] 00:12:22.430 }' 00:12:22.430 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.431 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.690 [2024-12-06 16:28:04.516021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:22.690 [2024-12-06 16:28:04.516148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:22.690 [2024-12-06 16:28:04.516163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:22.690 [2024-12-06 16:28:04.516505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:22.690 [2024-12-06 16:28:04.516649] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:22.690 [2024-12-06 16:28:04.516662] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:12:22.690 [2024-12-06 16:28:04.516869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.690 BaseBdev4 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.690 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.950 [ 00:12:22.950 { 00:12:22.950 "name": "BaseBdev4", 00:12:22.950 "aliases": [ 00:12:22.950 "4cfe5041-70c8-41f2-9f22-a600257ec45e" 00:12:22.950 ], 00:12:22.950 "product_name": "Malloc disk", 00:12:22.950 "block_size": 512, 00:12:22.950 "num_blocks": 65536, 00:12:22.950 "uuid": "4cfe5041-70c8-41f2-9f22-a600257ec45e", 00:12:22.950 "assigned_rate_limits": { 00:12:22.950 "rw_ios_per_sec": 0, 00:12:22.950 "rw_mbytes_per_sec": 0, 00:12:22.950 "r_mbytes_per_sec": 0, 00:12:22.950 "w_mbytes_per_sec": 0 00:12:22.950 }, 00:12:22.950 "claimed": true, 00:12:22.950 "claim_type": "exclusive_write", 00:12:22.950 "zoned": false, 00:12:22.950 "supported_io_types": { 00:12:22.950 "read": true, 00:12:22.950 "write": true, 00:12:22.950 "unmap": true, 00:12:22.950 "flush": true, 00:12:22.950 "reset": true, 00:12:22.950 
"nvme_admin": false, 00:12:22.950 "nvme_io": false, 00:12:22.950 "nvme_io_md": false, 00:12:22.950 "write_zeroes": true, 00:12:22.950 "zcopy": true, 00:12:22.950 "get_zone_info": false, 00:12:22.950 "zone_management": false, 00:12:22.950 "zone_append": false, 00:12:22.950 "compare": false, 00:12:22.950 "compare_and_write": false, 00:12:22.950 "abort": true, 00:12:22.950 "seek_hole": false, 00:12:22.950 "seek_data": false, 00:12:22.950 "copy": true, 00:12:22.950 "nvme_iov_md": false 00:12:22.950 }, 00:12:22.950 "memory_domains": [ 00:12:22.950 { 00:12:22.950 "dma_device_id": "system", 00:12:22.950 "dma_device_type": 1 00:12:22.950 }, 00:12:22.950 { 00:12:22.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.950 "dma_device_type": 2 00:12:22.950 } 00:12:22.950 ], 00:12:22.950 "driver_specific": {} 00:12:22.950 } 00:12:22.950 ] 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.950 16:28:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.950 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.950 "name": "Existed_Raid", 00:12:22.950 "uuid": "602d4271-76b1-4162-8dd0-f8b972578038", 00:12:22.950 "strip_size_kb": 64, 00:12:22.950 "state": "online", 00:12:22.950 "raid_level": "raid0", 00:12:22.950 "superblock": false, 00:12:22.950 "num_base_bdevs": 4, 00:12:22.950 "num_base_bdevs_discovered": 4, 00:12:22.950 "num_base_bdevs_operational": 4, 00:12:22.950 "base_bdevs_list": [ 00:12:22.950 { 00:12:22.950 "name": "BaseBdev1", 00:12:22.950 "uuid": "68c68a10-873e-4e1a-a832-ec130d8d629e", 00:12:22.950 "is_configured": true, 00:12:22.950 "data_offset": 0, 00:12:22.950 "data_size": 65536 00:12:22.951 }, 00:12:22.951 { 00:12:22.951 "name": "BaseBdev2", 00:12:22.951 "uuid": "c2e02982-8a38-4da0-8c16-5f09c687c211", 00:12:22.951 "is_configured": true, 00:12:22.951 "data_offset": 0, 00:12:22.951 "data_size": 65536 00:12:22.951 }, 00:12:22.951 { 00:12:22.951 "name": "BaseBdev3", 00:12:22.951 "uuid": 
"3279881d-6128-4a01-a251-d6aa32811f24", 00:12:22.951 "is_configured": true, 00:12:22.951 "data_offset": 0, 00:12:22.951 "data_size": 65536 00:12:22.951 }, 00:12:22.951 { 00:12:22.951 "name": "BaseBdev4", 00:12:22.951 "uuid": "4cfe5041-70c8-41f2-9f22-a600257ec45e", 00:12:22.951 "is_configured": true, 00:12:22.951 "data_offset": 0, 00:12:22.951 "data_size": 65536 00:12:22.951 } 00:12:22.951 ] 00:12:22.951 }' 00:12:22.951 16:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.951 16:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.211 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:23.211 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:23.211 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:23.211 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:23.211 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:23.211 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:23.211 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:23.211 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:23.211 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.211 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.211 [2024-12-06 16:28:05.027609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.471 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.471 16:28:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.471 "name": "Existed_Raid", 00:12:23.471 "aliases": [ 00:12:23.471 "602d4271-76b1-4162-8dd0-f8b972578038" 00:12:23.471 ], 00:12:23.471 "product_name": "Raid Volume", 00:12:23.471 "block_size": 512, 00:12:23.471 "num_blocks": 262144, 00:12:23.471 "uuid": "602d4271-76b1-4162-8dd0-f8b972578038", 00:12:23.471 "assigned_rate_limits": { 00:12:23.471 "rw_ios_per_sec": 0, 00:12:23.471 "rw_mbytes_per_sec": 0, 00:12:23.471 "r_mbytes_per_sec": 0, 00:12:23.471 "w_mbytes_per_sec": 0 00:12:23.471 }, 00:12:23.471 "claimed": false, 00:12:23.471 "zoned": false, 00:12:23.471 "supported_io_types": { 00:12:23.471 "read": true, 00:12:23.471 "write": true, 00:12:23.471 "unmap": true, 00:12:23.471 "flush": true, 00:12:23.471 "reset": true, 00:12:23.471 "nvme_admin": false, 00:12:23.471 "nvme_io": false, 00:12:23.471 "nvme_io_md": false, 00:12:23.471 "write_zeroes": true, 00:12:23.471 "zcopy": false, 00:12:23.471 "get_zone_info": false, 00:12:23.471 "zone_management": false, 00:12:23.471 "zone_append": false, 00:12:23.471 "compare": false, 00:12:23.471 "compare_and_write": false, 00:12:23.471 "abort": false, 00:12:23.471 "seek_hole": false, 00:12:23.471 "seek_data": false, 00:12:23.471 "copy": false, 00:12:23.471 "nvme_iov_md": false 00:12:23.471 }, 00:12:23.471 "memory_domains": [ 00:12:23.471 { 00:12:23.471 "dma_device_id": "system", 00:12:23.471 "dma_device_type": 1 00:12:23.471 }, 00:12:23.471 { 00:12:23.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.471 "dma_device_type": 2 00:12:23.471 }, 00:12:23.471 { 00:12:23.471 "dma_device_id": "system", 00:12:23.471 "dma_device_type": 1 00:12:23.471 }, 00:12:23.471 { 00:12:23.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.471 "dma_device_type": 2 00:12:23.471 }, 00:12:23.471 { 00:12:23.471 "dma_device_id": "system", 00:12:23.471 "dma_device_type": 1 00:12:23.471 }, 00:12:23.471 { 00:12:23.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:23.471 "dma_device_type": 2 00:12:23.471 }, 00:12:23.471 { 00:12:23.471 "dma_device_id": "system", 00:12:23.471 "dma_device_type": 1 00:12:23.471 }, 00:12:23.471 { 00:12:23.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.471 "dma_device_type": 2 00:12:23.471 } 00:12:23.471 ], 00:12:23.471 "driver_specific": { 00:12:23.471 "raid": { 00:12:23.471 "uuid": "602d4271-76b1-4162-8dd0-f8b972578038", 00:12:23.471 "strip_size_kb": 64, 00:12:23.471 "state": "online", 00:12:23.471 "raid_level": "raid0", 00:12:23.471 "superblock": false, 00:12:23.471 "num_base_bdevs": 4, 00:12:23.471 "num_base_bdevs_discovered": 4, 00:12:23.471 "num_base_bdevs_operational": 4, 00:12:23.471 "base_bdevs_list": [ 00:12:23.471 { 00:12:23.471 "name": "BaseBdev1", 00:12:23.471 "uuid": "68c68a10-873e-4e1a-a832-ec130d8d629e", 00:12:23.471 "is_configured": true, 00:12:23.471 "data_offset": 0, 00:12:23.471 "data_size": 65536 00:12:23.471 }, 00:12:23.471 { 00:12:23.471 "name": "BaseBdev2", 00:12:23.471 "uuid": "c2e02982-8a38-4da0-8c16-5f09c687c211", 00:12:23.471 "is_configured": true, 00:12:23.471 "data_offset": 0, 00:12:23.471 "data_size": 65536 00:12:23.471 }, 00:12:23.471 { 00:12:23.472 "name": "BaseBdev3", 00:12:23.472 "uuid": "3279881d-6128-4a01-a251-d6aa32811f24", 00:12:23.472 "is_configured": true, 00:12:23.472 "data_offset": 0, 00:12:23.472 "data_size": 65536 00:12:23.472 }, 00:12:23.472 { 00:12:23.472 "name": "BaseBdev4", 00:12:23.472 "uuid": "4cfe5041-70c8-41f2-9f22-a600257ec45e", 00:12:23.472 "is_configured": true, 00:12:23.472 "data_offset": 0, 00:12:23.472 "data_size": 65536 00:12:23.472 } 00:12:23.472 ] 00:12:23.472 } 00:12:23.472 } 00:12:23.472 }' 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:23.472 BaseBdev2 00:12:23.472 BaseBdev3 
00:12:23.472 BaseBdev4' 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.472 16:28:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.472 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.731 16:28:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.731 [2024-12-06 16:28:05.378698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.731 [2024-12-06 16:28:05.378773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.731 [2024-12-06 16:28:05.378847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.731 "name": "Existed_Raid", 00:12:23.731 "uuid": "602d4271-76b1-4162-8dd0-f8b972578038", 00:12:23.731 "strip_size_kb": 64, 00:12:23.731 "state": "offline", 00:12:23.731 "raid_level": "raid0", 00:12:23.731 "superblock": false, 00:12:23.731 "num_base_bdevs": 4, 00:12:23.731 "num_base_bdevs_discovered": 3, 00:12:23.731 "num_base_bdevs_operational": 3, 00:12:23.731 "base_bdevs_list": [ 00:12:23.731 { 00:12:23.731 "name": null, 00:12:23.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.731 "is_configured": false, 00:12:23.731 "data_offset": 0, 00:12:23.731 "data_size": 65536 00:12:23.731 }, 00:12:23.731 { 00:12:23.731 "name": "BaseBdev2", 00:12:23.731 "uuid": "c2e02982-8a38-4da0-8c16-5f09c687c211", 00:12:23.731 "is_configured": 
true, 00:12:23.731 "data_offset": 0, 00:12:23.731 "data_size": 65536 00:12:23.731 }, 00:12:23.731 { 00:12:23.731 "name": "BaseBdev3", 00:12:23.731 "uuid": "3279881d-6128-4a01-a251-d6aa32811f24", 00:12:23.731 "is_configured": true, 00:12:23.731 "data_offset": 0, 00:12:23.731 "data_size": 65536 00:12:23.731 }, 00:12:23.731 { 00:12:23.731 "name": "BaseBdev4", 00:12:23.731 "uuid": "4cfe5041-70c8-41f2-9f22-a600257ec45e", 00:12:23.731 "is_configured": true, 00:12:23.731 "data_offset": 0, 00:12:23.731 "data_size": 65536 00:12:23.731 } 00:12:23.731 ] 00:12:23.731 }' 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.731 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.300 [2024-12-06 16:28:05.889334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.300 [2024-12-06 16:28:05.956703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:24.300 16:28:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.300 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.301 16:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:24.301 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.301 16:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.301 [2024-12-06 16:28:06.023884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:24.301 [2024-12-06 16:28:06.023988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.301 BaseBdev2 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.301 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.301 [ 00:12:24.301 { 00:12:24.301 "name": "BaseBdev2", 00:12:24.301 "aliases": [ 00:12:24.301 "aee20940-8154-4b29-8db2-8e0a6d42e784" 00:12:24.301 ], 00:12:24.301 "product_name": "Malloc disk", 00:12:24.301 "block_size": 512, 00:12:24.301 "num_blocks": 65536, 00:12:24.301 "uuid": "aee20940-8154-4b29-8db2-8e0a6d42e784", 00:12:24.301 "assigned_rate_limits": { 00:12:24.301 "rw_ios_per_sec": 0, 00:12:24.301 "rw_mbytes_per_sec": 0, 00:12:24.301 "r_mbytes_per_sec": 0, 00:12:24.301 "w_mbytes_per_sec": 0 00:12:24.301 }, 00:12:24.301 "claimed": false, 00:12:24.301 "zoned": false, 00:12:24.301 "supported_io_types": { 00:12:24.301 "read": true, 00:12:24.301 "write": true, 00:12:24.301 "unmap": true, 00:12:24.301 "flush": true, 00:12:24.301 "reset": true, 00:12:24.301 "nvme_admin": false, 00:12:24.301 "nvme_io": false, 00:12:24.301 "nvme_io_md": false, 00:12:24.301 "write_zeroes": true, 00:12:24.301 "zcopy": true, 00:12:24.301 "get_zone_info": false, 00:12:24.301 "zone_management": false, 00:12:24.301 "zone_append": false, 00:12:24.301 "compare": false, 00:12:24.561 "compare_and_write": false, 00:12:24.561 "abort": true, 00:12:24.561 "seek_hole": false, 00:12:24.561 
"seek_data": false, 00:12:24.561 "copy": true, 00:12:24.561 "nvme_iov_md": false 00:12:24.561 }, 00:12:24.561 "memory_domains": [ 00:12:24.561 { 00:12:24.561 "dma_device_id": "system", 00:12:24.561 "dma_device_type": 1 00:12:24.561 }, 00:12:24.561 { 00:12:24.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.561 "dma_device_type": 2 00:12:24.561 } 00:12:24.561 ], 00:12:24.561 "driver_specific": {} 00:12:24.561 } 00:12:24.561 ] 00:12:24.561 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.561 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.561 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.561 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.561 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:24.561 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.561 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.561 BaseBdev3 00:12:24.561 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.561 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.562 [ 00:12:24.562 { 00:12:24.562 "name": "BaseBdev3", 00:12:24.562 "aliases": [ 00:12:24.562 "f8f04388-e4c1-42c7-9e9d-25c0a4923365" 00:12:24.562 ], 00:12:24.562 "product_name": "Malloc disk", 00:12:24.562 "block_size": 512, 00:12:24.562 "num_blocks": 65536, 00:12:24.562 "uuid": "f8f04388-e4c1-42c7-9e9d-25c0a4923365", 00:12:24.562 "assigned_rate_limits": { 00:12:24.562 "rw_ios_per_sec": 0, 00:12:24.562 "rw_mbytes_per_sec": 0, 00:12:24.562 "r_mbytes_per_sec": 0, 00:12:24.562 "w_mbytes_per_sec": 0 00:12:24.562 }, 00:12:24.562 "claimed": false, 00:12:24.562 "zoned": false, 00:12:24.562 "supported_io_types": { 00:12:24.562 "read": true, 00:12:24.562 "write": true, 00:12:24.562 "unmap": true, 00:12:24.562 "flush": true, 00:12:24.562 "reset": true, 00:12:24.562 "nvme_admin": false, 00:12:24.562 "nvme_io": false, 00:12:24.562 "nvme_io_md": false, 00:12:24.562 "write_zeroes": true, 00:12:24.562 "zcopy": true, 00:12:24.562 "get_zone_info": false, 00:12:24.562 "zone_management": false, 00:12:24.562 "zone_append": false, 00:12:24.562 "compare": false, 00:12:24.562 "compare_and_write": false, 00:12:24.562 "abort": true, 00:12:24.562 "seek_hole": false, 00:12:24.562 "seek_data": false, 
00:12:24.562 "copy": true, 00:12:24.562 "nvme_iov_md": false 00:12:24.562 }, 00:12:24.562 "memory_domains": [ 00:12:24.562 { 00:12:24.562 "dma_device_id": "system", 00:12:24.562 "dma_device_type": 1 00:12:24.562 }, 00:12:24.562 { 00:12:24.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.562 "dma_device_type": 2 00:12:24.562 } 00:12:24.562 ], 00:12:24.562 "driver_specific": {} 00:12:24.562 } 00:12:24.562 ] 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.562 BaseBdev4 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.562 
16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.562 [ 00:12:24.562 { 00:12:24.562 "name": "BaseBdev4", 00:12:24.562 "aliases": [ 00:12:24.562 "545fd12a-7930-4b69-9f78-5496a43c27e1" 00:12:24.562 ], 00:12:24.562 "product_name": "Malloc disk", 00:12:24.562 "block_size": 512, 00:12:24.562 "num_blocks": 65536, 00:12:24.562 "uuid": "545fd12a-7930-4b69-9f78-5496a43c27e1", 00:12:24.562 "assigned_rate_limits": { 00:12:24.562 "rw_ios_per_sec": 0, 00:12:24.562 "rw_mbytes_per_sec": 0, 00:12:24.562 "r_mbytes_per_sec": 0, 00:12:24.562 "w_mbytes_per_sec": 0 00:12:24.562 }, 00:12:24.562 "claimed": false, 00:12:24.562 "zoned": false, 00:12:24.562 "supported_io_types": { 00:12:24.562 "read": true, 00:12:24.562 "write": true, 00:12:24.562 "unmap": true, 00:12:24.562 "flush": true, 00:12:24.562 "reset": true, 00:12:24.562 "nvme_admin": false, 00:12:24.562 "nvme_io": false, 00:12:24.562 "nvme_io_md": false, 00:12:24.562 "write_zeroes": true, 00:12:24.562 "zcopy": true, 00:12:24.562 "get_zone_info": false, 00:12:24.562 "zone_management": false, 00:12:24.562 "zone_append": false, 00:12:24.562 "compare": false, 00:12:24.562 "compare_and_write": false, 00:12:24.562 "abort": true, 00:12:24.562 "seek_hole": false, 00:12:24.562 "seek_data": false, 00:12:24.562 
"copy": true, 00:12:24.562 "nvme_iov_md": false 00:12:24.562 }, 00:12:24.562 "memory_domains": [ 00:12:24.562 { 00:12:24.562 "dma_device_id": "system", 00:12:24.562 "dma_device_type": 1 00:12:24.562 }, 00:12:24.562 { 00:12:24.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.562 "dma_device_type": 2 00:12:24.562 } 00:12:24.562 ], 00:12:24.562 "driver_specific": {} 00:12:24.562 } 00:12:24.562 ] 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.562 [2024-12-06 16:28:06.253101] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:24.562 [2024-12-06 16:28:06.253151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:24.562 [2024-12-06 16:28:06.253177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.562 [2024-12-06 16:28:06.255085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.562 [2024-12-06 16:28:06.255134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.562 16:28:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.562 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.562 "name": "Existed_Raid", 00:12:24.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.562 "strip_size_kb": 64, 00:12:24.562 "state": "configuring", 00:12:24.562 
"raid_level": "raid0", 00:12:24.562 "superblock": false, 00:12:24.562 "num_base_bdevs": 4, 00:12:24.562 "num_base_bdevs_discovered": 3, 00:12:24.562 "num_base_bdevs_operational": 4, 00:12:24.562 "base_bdevs_list": [ 00:12:24.562 { 00:12:24.562 "name": "BaseBdev1", 00:12:24.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.562 "is_configured": false, 00:12:24.562 "data_offset": 0, 00:12:24.563 "data_size": 0 00:12:24.563 }, 00:12:24.563 { 00:12:24.563 "name": "BaseBdev2", 00:12:24.563 "uuid": "aee20940-8154-4b29-8db2-8e0a6d42e784", 00:12:24.563 "is_configured": true, 00:12:24.563 "data_offset": 0, 00:12:24.563 "data_size": 65536 00:12:24.563 }, 00:12:24.563 { 00:12:24.563 "name": "BaseBdev3", 00:12:24.563 "uuid": "f8f04388-e4c1-42c7-9e9d-25c0a4923365", 00:12:24.563 "is_configured": true, 00:12:24.563 "data_offset": 0, 00:12:24.563 "data_size": 65536 00:12:24.563 }, 00:12:24.563 { 00:12:24.563 "name": "BaseBdev4", 00:12:24.563 "uuid": "545fd12a-7930-4b69-9f78-5496a43c27e1", 00:12:24.563 "is_configured": true, 00:12:24.563 "data_offset": 0, 00:12:24.563 "data_size": 65536 00:12:24.563 } 00:12:24.563 ] 00:12:24.563 }' 00:12:24.563 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.563 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.132 [2024-12-06 16:28:06.684387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.132 "name": "Existed_Raid", 00:12:25.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.132 "strip_size_kb": 64, 00:12:25.132 "state": "configuring", 00:12:25.132 "raid_level": "raid0", 00:12:25.132 "superblock": false, 00:12:25.132 
"num_base_bdevs": 4, 00:12:25.132 "num_base_bdevs_discovered": 2, 00:12:25.132 "num_base_bdevs_operational": 4, 00:12:25.132 "base_bdevs_list": [ 00:12:25.132 { 00:12:25.132 "name": "BaseBdev1", 00:12:25.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.132 "is_configured": false, 00:12:25.132 "data_offset": 0, 00:12:25.132 "data_size": 0 00:12:25.132 }, 00:12:25.132 { 00:12:25.132 "name": null, 00:12:25.132 "uuid": "aee20940-8154-4b29-8db2-8e0a6d42e784", 00:12:25.132 "is_configured": false, 00:12:25.132 "data_offset": 0, 00:12:25.132 "data_size": 65536 00:12:25.132 }, 00:12:25.132 { 00:12:25.132 "name": "BaseBdev3", 00:12:25.132 "uuid": "f8f04388-e4c1-42c7-9e9d-25c0a4923365", 00:12:25.132 "is_configured": true, 00:12:25.132 "data_offset": 0, 00:12:25.132 "data_size": 65536 00:12:25.132 }, 00:12:25.132 { 00:12:25.132 "name": "BaseBdev4", 00:12:25.132 "uuid": "545fd12a-7930-4b69-9f78-5496a43c27e1", 00:12:25.132 "is_configured": true, 00:12:25.132 "data_offset": 0, 00:12:25.132 "data_size": 65536 00:12:25.132 } 00:12:25.132 ] 00:12:25.132 }' 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.132 16:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:25.393 16:28:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.393 [2024-12-06 16:28:07.218695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.393 BaseBdev1 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.393 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.653 [ 00:12:25.653 { 00:12:25.653 "name": "BaseBdev1", 00:12:25.653 "aliases": [ 00:12:25.653 "032c1e09-ccd2-40ae-8456-5d1604cdee05" 00:12:25.653 ], 00:12:25.653 "product_name": "Malloc disk", 00:12:25.653 "block_size": 512, 00:12:25.653 "num_blocks": 65536, 00:12:25.653 "uuid": "032c1e09-ccd2-40ae-8456-5d1604cdee05", 00:12:25.653 "assigned_rate_limits": { 00:12:25.653 "rw_ios_per_sec": 0, 00:12:25.653 "rw_mbytes_per_sec": 0, 00:12:25.653 "r_mbytes_per_sec": 0, 00:12:25.653 "w_mbytes_per_sec": 0 00:12:25.653 }, 00:12:25.653 "claimed": true, 00:12:25.653 "claim_type": "exclusive_write", 00:12:25.653 "zoned": false, 00:12:25.653 "supported_io_types": { 00:12:25.653 "read": true, 00:12:25.653 "write": true, 00:12:25.653 "unmap": true, 00:12:25.653 "flush": true, 00:12:25.653 "reset": true, 00:12:25.653 "nvme_admin": false, 00:12:25.653 "nvme_io": false, 00:12:25.653 "nvme_io_md": false, 00:12:25.653 "write_zeroes": true, 00:12:25.653 "zcopy": true, 00:12:25.653 "get_zone_info": false, 00:12:25.653 "zone_management": false, 00:12:25.653 "zone_append": false, 00:12:25.653 "compare": false, 00:12:25.653 "compare_and_write": false, 00:12:25.653 "abort": true, 00:12:25.653 "seek_hole": false, 00:12:25.653 "seek_data": false, 00:12:25.653 "copy": true, 00:12:25.653 "nvme_iov_md": false 00:12:25.653 }, 00:12:25.653 "memory_domains": [ 00:12:25.653 { 00:12:25.653 "dma_device_id": "system", 00:12:25.653 "dma_device_type": 1 00:12:25.653 }, 00:12:25.653 { 00:12:25.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.653 "dma_device_type": 2 00:12:25.653 } 00:12:25.653 ], 00:12:25.653 "driver_specific": {} 00:12:25.653 } 00:12:25.653 ] 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.653 "name": "Existed_Raid", 00:12:25.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.653 "strip_size_kb": 64, 00:12:25.653 "state": "configuring", 00:12:25.653 "raid_level": "raid0", 00:12:25.653 "superblock": false, 
00:12:25.653 "num_base_bdevs": 4, 00:12:25.653 "num_base_bdevs_discovered": 3, 00:12:25.653 "num_base_bdevs_operational": 4, 00:12:25.653 "base_bdevs_list": [ 00:12:25.653 { 00:12:25.653 "name": "BaseBdev1", 00:12:25.653 "uuid": "032c1e09-ccd2-40ae-8456-5d1604cdee05", 00:12:25.653 "is_configured": true, 00:12:25.653 "data_offset": 0, 00:12:25.653 "data_size": 65536 00:12:25.653 }, 00:12:25.653 { 00:12:25.653 "name": null, 00:12:25.653 "uuid": "aee20940-8154-4b29-8db2-8e0a6d42e784", 00:12:25.653 "is_configured": false, 00:12:25.653 "data_offset": 0, 00:12:25.653 "data_size": 65536 00:12:25.653 }, 00:12:25.653 { 00:12:25.653 "name": "BaseBdev3", 00:12:25.653 "uuid": "f8f04388-e4c1-42c7-9e9d-25c0a4923365", 00:12:25.653 "is_configured": true, 00:12:25.653 "data_offset": 0, 00:12:25.653 "data_size": 65536 00:12:25.653 }, 00:12:25.653 { 00:12:25.653 "name": "BaseBdev4", 00:12:25.653 "uuid": "545fd12a-7930-4b69-9f78-5496a43c27e1", 00:12:25.653 "is_configured": true, 00:12:25.653 "data_offset": 0, 00:12:25.653 "data_size": 65536 00:12:25.653 } 00:12:25.653 ] 00:12:25.653 }' 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.653 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.913 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.913 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:25.913 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.913 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.913 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.173 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:26.173 16:28:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:26.173 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.173 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.173 [2024-12-06 16:28:07.761890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:26.173 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.173 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:26.173 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.173 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.174 16:28:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.174 "name": "Existed_Raid", 00:12:26.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.174 "strip_size_kb": 64, 00:12:26.174 "state": "configuring", 00:12:26.174 "raid_level": "raid0", 00:12:26.174 "superblock": false, 00:12:26.174 "num_base_bdevs": 4, 00:12:26.174 "num_base_bdevs_discovered": 2, 00:12:26.174 "num_base_bdevs_operational": 4, 00:12:26.174 "base_bdevs_list": [ 00:12:26.174 { 00:12:26.174 "name": "BaseBdev1", 00:12:26.174 "uuid": "032c1e09-ccd2-40ae-8456-5d1604cdee05", 00:12:26.174 "is_configured": true, 00:12:26.174 "data_offset": 0, 00:12:26.174 "data_size": 65536 00:12:26.174 }, 00:12:26.174 { 00:12:26.174 "name": null, 00:12:26.174 "uuid": "aee20940-8154-4b29-8db2-8e0a6d42e784", 00:12:26.174 "is_configured": false, 00:12:26.174 "data_offset": 0, 00:12:26.174 "data_size": 65536 00:12:26.174 }, 00:12:26.174 { 00:12:26.174 "name": null, 00:12:26.174 "uuid": "f8f04388-e4c1-42c7-9e9d-25c0a4923365", 00:12:26.174 "is_configured": false, 00:12:26.174 "data_offset": 0, 00:12:26.174 "data_size": 65536 00:12:26.174 }, 00:12:26.174 { 00:12:26.174 "name": "BaseBdev4", 00:12:26.174 "uuid": "545fd12a-7930-4b69-9f78-5496a43c27e1", 00:12:26.174 "is_configured": true, 00:12:26.174 "data_offset": 0, 00:12:26.174 "data_size": 65536 00:12:26.174 } 00:12:26.174 ] 00:12:26.174 }' 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.174 16:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.433 16:28:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.433 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.433 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.433 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.433 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.433 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:26.433 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:26.433 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.433 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.433 [2024-12-06 16:28:08.269087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.692 "name": "Existed_Raid", 00:12:26.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.692 "strip_size_kb": 64, 00:12:26.692 "state": "configuring", 00:12:26.692 "raid_level": "raid0", 00:12:26.692 "superblock": false, 00:12:26.692 "num_base_bdevs": 4, 00:12:26.692 "num_base_bdevs_discovered": 3, 00:12:26.692 "num_base_bdevs_operational": 4, 00:12:26.692 "base_bdevs_list": [ 00:12:26.692 { 00:12:26.692 "name": "BaseBdev1", 00:12:26.692 "uuid": "032c1e09-ccd2-40ae-8456-5d1604cdee05", 00:12:26.692 "is_configured": true, 00:12:26.692 "data_offset": 0, 00:12:26.692 "data_size": 65536 00:12:26.692 }, 00:12:26.692 { 00:12:26.692 "name": null, 00:12:26.692 "uuid": "aee20940-8154-4b29-8db2-8e0a6d42e784", 00:12:26.692 "is_configured": false, 00:12:26.692 "data_offset": 0, 00:12:26.692 "data_size": 65536 00:12:26.692 }, 00:12:26.692 { 00:12:26.692 "name": "BaseBdev3", 00:12:26.692 "uuid": "f8f04388-e4c1-42c7-9e9d-25c0a4923365", 
00:12:26.692 "is_configured": true, 00:12:26.692 "data_offset": 0, 00:12:26.692 "data_size": 65536 00:12:26.692 }, 00:12:26.692 { 00:12:26.692 "name": "BaseBdev4", 00:12:26.692 "uuid": "545fd12a-7930-4b69-9f78-5496a43c27e1", 00:12:26.692 "is_configured": true, 00:12:26.692 "data_offset": 0, 00:12:26.692 "data_size": 65536 00:12:26.692 } 00:12:26.692 ] 00:12:26.692 }' 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.692 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.951 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.951 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.951 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.951 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.951 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.951 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:26.951 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:26.951 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.951 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.951 [2024-12-06 16:28:08.736265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:26.951 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.951 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:26.951 16:28:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.952 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.211 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.211 "name": "Existed_Raid", 00:12:27.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.211 "strip_size_kb": 64, 00:12:27.211 "state": "configuring", 00:12:27.211 "raid_level": "raid0", 00:12:27.211 "superblock": false, 00:12:27.211 "num_base_bdevs": 4, 00:12:27.211 "num_base_bdevs_discovered": 2, 00:12:27.211 
"num_base_bdevs_operational": 4, 00:12:27.211 "base_bdevs_list": [ 00:12:27.211 { 00:12:27.211 "name": null, 00:12:27.211 "uuid": "032c1e09-ccd2-40ae-8456-5d1604cdee05", 00:12:27.211 "is_configured": false, 00:12:27.211 "data_offset": 0, 00:12:27.211 "data_size": 65536 00:12:27.211 }, 00:12:27.211 { 00:12:27.211 "name": null, 00:12:27.211 "uuid": "aee20940-8154-4b29-8db2-8e0a6d42e784", 00:12:27.211 "is_configured": false, 00:12:27.211 "data_offset": 0, 00:12:27.211 "data_size": 65536 00:12:27.211 }, 00:12:27.211 { 00:12:27.211 "name": "BaseBdev3", 00:12:27.211 "uuid": "f8f04388-e4c1-42c7-9e9d-25c0a4923365", 00:12:27.211 "is_configured": true, 00:12:27.211 "data_offset": 0, 00:12:27.211 "data_size": 65536 00:12:27.211 }, 00:12:27.211 { 00:12:27.211 "name": "BaseBdev4", 00:12:27.211 "uuid": "545fd12a-7930-4b69-9f78-5496a43c27e1", 00:12:27.211 "is_configured": true, 00:12:27.211 "data_offset": 0, 00:12:27.211 "data_size": 65536 00:12:27.211 } 00:12:27.211 ] 00:12:27.211 }' 00:12:27.211 16:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.211 16:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.471 [2024-12-06 16:28:09.262073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.471 
16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.471 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.730 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.730 "name": "Existed_Raid", 00:12:27.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.731 "strip_size_kb": 64, 00:12:27.731 "state": "configuring", 00:12:27.731 "raid_level": "raid0", 00:12:27.731 "superblock": false, 00:12:27.731 "num_base_bdevs": 4, 00:12:27.731 "num_base_bdevs_discovered": 3, 00:12:27.731 "num_base_bdevs_operational": 4, 00:12:27.731 "base_bdevs_list": [ 00:12:27.731 { 00:12:27.731 "name": null, 00:12:27.731 "uuid": "032c1e09-ccd2-40ae-8456-5d1604cdee05", 00:12:27.731 "is_configured": false, 00:12:27.731 "data_offset": 0, 00:12:27.731 "data_size": 65536 00:12:27.731 }, 00:12:27.731 { 00:12:27.731 "name": "BaseBdev2", 00:12:27.731 "uuid": "aee20940-8154-4b29-8db2-8e0a6d42e784", 00:12:27.731 "is_configured": true, 00:12:27.731 "data_offset": 0, 00:12:27.731 "data_size": 65536 00:12:27.731 }, 00:12:27.731 { 00:12:27.731 "name": "BaseBdev3", 00:12:27.731 "uuid": "f8f04388-e4c1-42c7-9e9d-25c0a4923365", 00:12:27.731 "is_configured": true, 00:12:27.731 "data_offset": 0, 00:12:27.731 "data_size": 65536 00:12:27.731 }, 00:12:27.731 { 00:12:27.731 "name": "BaseBdev4", 00:12:27.731 "uuid": "545fd12a-7930-4b69-9f78-5496a43c27e1", 00:12:27.731 "is_configured": true, 00:12:27.731 "data_offset": 0, 00:12:27.731 "data_size": 65536 00:12:27.731 } 00:12:27.731 ] 00:12:27.731 }' 00:12:27.731 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.731 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.989 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.989 16:28:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:27.989 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.989 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.989 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.989 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:27.989 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.989 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:27.989 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.989 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 032c1e09-ccd2-40ae-8456-5d1604cdee05 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.247 [2024-12-06 16:28:09.876140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:28.247 [2024-12-06 16:28:09.876309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:28.247 [2024-12-06 16:28:09.876326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:28.247 [2024-12-06 16:28:09.876645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:12:28.247 [2024-12-06 16:28:09.876793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:28.247 [2024-12-06 16:28:09.876805] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:12:28.247 [2024-12-06 16:28:09.877014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.247 NewBaseBdev 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.247 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:28.248 [ 00:12:28.248 { 00:12:28.248 "name": "NewBaseBdev", 00:12:28.248 "aliases": [ 00:12:28.248 "032c1e09-ccd2-40ae-8456-5d1604cdee05" 00:12:28.248 ], 00:12:28.248 "product_name": "Malloc disk", 00:12:28.248 "block_size": 512, 00:12:28.248 "num_blocks": 65536, 00:12:28.248 "uuid": "032c1e09-ccd2-40ae-8456-5d1604cdee05", 00:12:28.248 "assigned_rate_limits": { 00:12:28.248 "rw_ios_per_sec": 0, 00:12:28.248 "rw_mbytes_per_sec": 0, 00:12:28.248 "r_mbytes_per_sec": 0, 00:12:28.248 "w_mbytes_per_sec": 0 00:12:28.248 }, 00:12:28.248 "claimed": true, 00:12:28.248 "claim_type": "exclusive_write", 00:12:28.248 "zoned": false, 00:12:28.248 "supported_io_types": { 00:12:28.248 "read": true, 00:12:28.248 "write": true, 00:12:28.248 "unmap": true, 00:12:28.248 "flush": true, 00:12:28.248 "reset": true, 00:12:28.248 "nvme_admin": false, 00:12:28.248 "nvme_io": false, 00:12:28.248 "nvme_io_md": false, 00:12:28.248 "write_zeroes": true, 00:12:28.248 "zcopy": true, 00:12:28.248 "get_zone_info": false, 00:12:28.248 "zone_management": false, 00:12:28.248 "zone_append": false, 00:12:28.248 "compare": false, 00:12:28.248 "compare_and_write": false, 00:12:28.248 "abort": true, 00:12:28.248 "seek_hole": false, 00:12:28.248 "seek_data": false, 00:12:28.248 "copy": true, 00:12:28.248 "nvme_iov_md": false 00:12:28.248 }, 00:12:28.248 "memory_domains": [ 00:12:28.248 { 00:12:28.248 "dma_device_id": "system", 00:12:28.248 "dma_device_type": 1 00:12:28.248 }, 00:12:28.248 { 00:12:28.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.248 "dma_device_type": 2 00:12:28.248 } 00:12:28.248 ], 00:12:28.248 "driver_specific": {} 00:12:28.248 } 00:12:28.248 ] 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.248 "name": "Existed_Raid", 00:12:28.248 "uuid": "13f5b342-c568-4fc7-88b5-5a496b417239", 00:12:28.248 "strip_size_kb": 64, 00:12:28.248 "state": "online", 00:12:28.248 "raid_level": "raid0", 00:12:28.248 "superblock": false, 00:12:28.248 "num_base_bdevs": 4, 00:12:28.248 
"num_base_bdevs_discovered": 4, 00:12:28.248 "num_base_bdevs_operational": 4, 00:12:28.248 "base_bdevs_list": [ 00:12:28.248 { 00:12:28.248 "name": "NewBaseBdev", 00:12:28.248 "uuid": "032c1e09-ccd2-40ae-8456-5d1604cdee05", 00:12:28.248 "is_configured": true, 00:12:28.248 "data_offset": 0, 00:12:28.248 "data_size": 65536 00:12:28.248 }, 00:12:28.248 { 00:12:28.248 "name": "BaseBdev2", 00:12:28.248 "uuid": "aee20940-8154-4b29-8db2-8e0a6d42e784", 00:12:28.248 "is_configured": true, 00:12:28.248 "data_offset": 0, 00:12:28.248 "data_size": 65536 00:12:28.248 }, 00:12:28.248 { 00:12:28.248 "name": "BaseBdev3", 00:12:28.248 "uuid": "f8f04388-e4c1-42c7-9e9d-25c0a4923365", 00:12:28.248 "is_configured": true, 00:12:28.248 "data_offset": 0, 00:12:28.248 "data_size": 65536 00:12:28.248 }, 00:12:28.248 { 00:12:28.248 "name": "BaseBdev4", 00:12:28.248 "uuid": "545fd12a-7930-4b69-9f78-5496a43c27e1", 00:12:28.248 "is_configured": true, 00:12:28.248 "data_offset": 0, 00:12:28.248 "data_size": 65536 00:12:28.248 } 00:12:28.248 ] 00:12:28.248 }' 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.248 16:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.507 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:28.507 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:28.507 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:28.507 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:28.508 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:28.508 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:28.508 16:28:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:28.508 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.508 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.508 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:28.508 [2024-12-06 16:28:10.331870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.766 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.766 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:28.766 "name": "Existed_Raid", 00:12:28.766 "aliases": [ 00:12:28.766 "13f5b342-c568-4fc7-88b5-5a496b417239" 00:12:28.766 ], 00:12:28.766 "product_name": "Raid Volume", 00:12:28.766 "block_size": 512, 00:12:28.766 "num_blocks": 262144, 00:12:28.766 "uuid": "13f5b342-c568-4fc7-88b5-5a496b417239", 00:12:28.766 "assigned_rate_limits": { 00:12:28.766 "rw_ios_per_sec": 0, 00:12:28.766 "rw_mbytes_per_sec": 0, 00:12:28.766 "r_mbytes_per_sec": 0, 00:12:28.766 "w_mbytes_per_sec": 0 00:12:28.766 }, 00:12:28.766 "claimed": false, 00:12:28.766 "zoned": false, 00:12:28.766 "supported_io_types": { 00:12:28.766 "read": true, 00:12:28.766 "write": true, 00:12:28.766 "unmap": true, 00:12:28.766 "flush": true, 00:12:28.766 "reset": true, 00:12:28.766 "nvme_admin": false, 00:12:28.766 "nvme_io": false, 00:12:28.766 "nvme_io_md": false, 00:12:28.766 "write_zeroes": true, 00:12:28.766 "zcopy": false, 00:12:28.766 "get_zone_info": false, 00:12:28.766 "zone_management": false, 00:12:28.766 "zone_append": false, 00:12:28.766 "compare": false, 00:12:28.766 "compare_and_write": false, 00:12:28.766 "abort": false, 00:12:28.766 "seek_hole": false, 00:12:28.766 "seek_data": false, 00:12:28.766 "copy": false, 00:12:28.766 "nvme_iov_md": false 00:12:28.766 }, 00:12:28.766 "memory_domains": [ 
00:12:28.766 { 00:12:28.766 "dma_device_id": "system", 00:12:28.766 "dma_device_type": 1 00:12:28.766 }, 00:12:28.766 { 00:12:28.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.766 "dma_device_type": 2 00:12:28.766 }, 00:12:28.766 { 00:12:28.766 "dma_device_id": "system", 00:12:28.766 "dma_device_type": 1 00:12:28.766 }, 00:12:28.766 { 00:12:28.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.766 "dma_device_type": 2 00:12:28.766 }, 00:12:28.766 { 00:12:28.766 "dma_device_id": "system", 00:12:28.767 "dma_device_type": 1 00:12:28.767 }, 00:12:28.767 { 00:12:28.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.767 "dma_device_type": 2 00:12:28.767 }, 00:12:28.767 { 00:12:28.767 "dma_device_id": "system", 00:12:28.767 "dma_device_type": 1 00:12:28.767 }, 00:12:28.767 { 00:12:28.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.767 "dma_device_type": 2 00:12:28.767 } 00:12:28.767 ], 00:12:28.767 "driver_specific": { 00:12:28.767 "raid": { 00:12:28.767 "uuid": "13f5b342-c568-4fc7-88b5-5a496b417239", 00:12:28.767 "strip_size_kb": 64, 00:12:28.767 "state": "online", 00:12:28.767 "raid_level": "raid0", 00:12:28.767 "superblock": false, 00:12:28.767 "num_base_bdevs": 4, 00:12:28.767 "num_base_bdevs_discovered": 4, 00:12:28.767 "num_base_bdevs_operational": 4, 00:12:28.767 "base_bdevs_list": [ 00:12:28.767 { 00:12:28.767 "name": "NewBaseBdev", 00:12:28.767 "uuid": "032c1e09-ccd2-40ae-8456-5d1604cdee05", 00:12:28.767 "is_configured": true, 00:12:28.767 "data_offset": 0, 00:12:28.767 "data_size": 65536 00:12:28.767 }, 00:12:28.767 { 00:12:28.767 "name": "BaseBdev2", 00:12:28.767 "uuid": "aee20940-8154-4b29-8db2-8e0a6d42e784", 00:12:28.767 "is_configured": true, 00:12:28.767 "data_offset": 0, 00:12:28.767 "data_size": 65536 00:12:28.767 }, 00:12:28.767 { 00:12:28.767 "name": "BaseBdev3", 00:12:28.767 "uuid": "f8f04388-e4c1-42c7-9e9d-25c0a4923365", 00:12:28.767 "is_configured": true, 00:12:28.767 "data_offset": 0, 00:12:28.767 "data_size": 65536 
00:12:28.767 }, 00:12:28.767 { 00:12:28.767 "name": "BaseBdev4", 00:12:28.767 "uuid": "545fd12a-7930-4b69-9f78-5496a43c27e1", 00:12:28.767 "is_configured": true, 00:12:28.767 "data_offset": 0, 00:12:28.767 "data_size": 65536 00:12:28.767 } 00:12:28.767 ] 00:12:28.767 } 00:12:28.767 } 00:12:28.767 }' 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:28.767 BaseBdev2 00:12:28.767 BaseBdev3 00:12:28.767 BaseBdev4' 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.767 
16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.767 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.027 [2024-12-06 16:28:10.662896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.027 [2024-12-06 16:28:10.662986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.027 [2024-12-06 16:28:10.663122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.027 [2024-12-06 16:28:10.663239] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.027 [2024-12-06 16:28:10.663292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80748 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 80748 ']' 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80748 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80748 00:12:29.027 killing process with pid 80748 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80748' 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 80748 00:12:29.027 [2024-12-06 16:28:10.713085] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.027 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 80748 00:12:29.027 [2024-12-06 16:28:10.756522] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.288 16:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:29.288 00:12:29.288 real 0m9.961s 00:12:29.288 user 0m17.141s 00:12:29.288 sys 0m2.100s 00:12:29.288 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.288 ************************************ 00:12:29.288 END TEST raid_state_function_test 00:12:29.288 ************************************ 00:12:29.288 16:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.288 16:28:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:12:29.288 16:28:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:29.288 16:28:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.288 16:28:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.288 ************************************ 00:12:29.288 START TEST raid_state_function_test_sb 00:12:29.288 ************************************ 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:29.288 
16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:29.288 Process raid pid: 81403 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81403 00:12:29.288 16:28:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81403' 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81403 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81403 ']' 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.288 16:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.289 16:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.289 16:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.548 [2024-12-06 16:28:11.139619] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:12:29.549 [2024-12-06 16:28:11.139753] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.549 [2024-12-06 16:28:11.312967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.549 [2024-12-06 16:28:11.339498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.549 [2024-12-06 16:28:11.381831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.549 [2024-12-06 16:28:11.381866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.488 [2024-12-06 16:28:12.013122] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:30.488 [2024-12-06 16:28:12.013191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:30.488 [2024-12-06 16:28:12.013211] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:30.488 [2024-12-06 16:28:12.013222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.488 [2024-12-06 16:28:12.013228] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:30.488 [2024-12-06 16:28:12.013239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.488 [2024-12-06 16:28:12.013245] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:30.488 [2024-12-06 16:28:12.013254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.488 16:28:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.488 "name": "Existed_Raid", 00:12:30.488 "uuid": "b34b8e34-a94c-4443-9d26-e8e4557df563", 00:12:30.488 "strip_size_kb": 64, 00:12:30.488 "state": "configuring", 00:12:30.488 "raid_level": "raid0", 00:12:30.488 "superblock": true, 00:12:30.488 "num_base_bdevs": 4, 00:12:30.488 "num_base_bdevs_discovered": 0, 00:12:30.488 "num_base_bdevs_operational": 4, 00:12:30.488 "base_bdevs_list": [ 00:12:30.488 { 00:12:30.488 "name": "BaseBdev1", 00:12:30.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.488 "is_configured": false, 00:12:30.488 "data_offset": 0, 00:12:30.488 "data_size": 0 00:12:30.488 }, 00:12:30.488 { 00:12:30.488 "name": "BaseBdev2", 00:12:30.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.488 "is_configured": false, 00:12:30.488 "data_offset": 0, 00:12:30.488 "data_size": 0 00:12:30.488 }, 00:12:30.488 { 00:12:30.488 "name": "BaseBdev3", 00:12:30.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.488 "is_configured": false, 00:12:30.488 "data_offset": 0, 00:12:30.488 "data_size": 0 00:12:30.488 }, 00:12:30.488 { 00:12:30.488 "name": "BaseBdev4", 00:12:30.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.488 "is_configured": false, 00:12:30.488 "data_offset": 0, 00:12:30.488 "data_size": 0 00:12:30.488 } 00:12:30.488 ] 00:12:30.488 }' 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.488 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.748 [2024-12-06 16:28:12.396401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:30.748 [2024-12-06 16:28:12.396497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.748 [2024-12-06 16:28:12.408408] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:30.748 [2024-12-06 16:28:12.408503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:30.748 [2024-12-06 16:28:12.408539] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:30.748 [2024-12-06 16:28:12.408582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.748 [2024-12-06 16:28:12.408661] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:30.748 [2024-12-06 16:28:12.408710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.748 [2024-12-06 16:28:12.408742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:12:30.748 [2024-12-06 16:28:12.408775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.748 [2024-12-06 16:28:12.429501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.748 BaseBdev1 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.748 [ 00:12:30.748 { 00:12:30.748 "name": "BaseBdev1", 00:12:30.748 "aliases": [ 00:12:30.748 "238c423d-35e9-42c7-93b5-470d4201b43a" 00:12:30.748 ], 00:12:30.748 "product_name": "Malloc disk", 00:12:30.748 "block_size": 512, 00:12:30.748 "num_blocks": 65536, 00:12:30.748 "uuid": "238c423d-35e9-42c7-93b5-470d4201b43a", 00:12:30.748 "assigned_rate_limits": { 00:12:30.748 "rw_ios_per_sec": 0, 00:12:30.748 "rw_mbytes_per_sec": 0, 00:12:30.748 "r_mbytes_per_sec": 0, 00:12:30.748 "w_mbytes_per_sec": 0 00:12:30.748 }, 00:12:30.748 "claimed": true, 00:12:30.748 "claim_type": "exclusive_write", 00:12:30.748 "zoned": false, 00:12:30.748 "supported_io_types": { 00:12:30.748 "read": true, 00:12:30.748 "write": true, 00:12:30.748 "unmap": true, 00:12:30.748 "flush": true, 00:12:30.748 "reset": true, 00:12:30.748 "nvme_admin": false, 00:12:30.748 "nvme_io": false, 00:12:30.748 "nvme_io_md": false, 00:12:30.748 "write_zeroes": true, 00:12:30.748 "zcopy": true, 00:12:30.748 "get_zone_info": false, 00:12:30.748 "zone_management": false, 00:12:30.748 "zone_append": false, 00:12:30.748 "compare": false, 00:12:30.748 "compare_and_write": false, 00:12:30.748 "abort": true, 00:12:30.748 "seek_hole": false, 00:12:30.748 "seek_data": false, 00:12:30.748 "copy": true, 00:12:30.748 "nvme_iov_md": false 00:12:30.748 }, 00:12:30.748 "memory_domains": [ 00:12:30.748 { 00:12:30.748 "dma_device_id": "system", 00:12:30.748 "dma_device_type": 1 00:12:30.748 }, 00:12:30.748 { 00:12:30.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.748 "dma_device_type": 2 00:12:30.748 } 00:12:30.748 ], 00:12:30.748 "driver_specific": {} 
00:12:30.748 } 00:12:30.748 ] 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.748 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.749 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.749 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.749 16:28:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.749 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.749 "name": "Existed_Raid", 00:12:30.749 "uuid": "1992c014-7227-4653-8c47-a740a5f01888", 00:12:30.749 "strip_size_kb": 64, 00:12:30.749 "state": "configuring", 00:12:30.749 "raid_level": "raid0", 00:12:30.749 "superblock": true, 00:12:30.749 "num_base_bdevs": 4, 00:12:30.749 "num_base_bdevs_discovered": 1, 00:12:30.749 "num_base_bdevs_operational": 4, 00:12:30.749 "base_bdevs_list": [ 00:12:30.749 { 00:12:30.749 "name": "BaseBdev1", 00:12:30.749 "uuid": "238c423d-35e9-42c7-93b5-470d4201b43a", 00:12:30.749 "is_configured": true, 00:12:30.749 "data_offset": 2048, 00:12:30.749 "data_size": 63488 00:12:30.749 }, 00:12:30.749 { 00:12:30.749 "name": "BaseBdev2", 00:12:30.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.749 "is_configured": false, 00:12:30.749 "data_offset": 0, 00:12:30.749 "data_size": 0 00:12:30.749 }, 00:12:30.749 { 00:12:30.749 "name": "BaseBdev3", 00:12:30.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.749 "is_configured": false, 00:12:30.749 "data_offset": 0, 00:12:30.749 "data_size": 0 00:12:30.749 }, 00:12:30.749 { 00:12:30.749 "name": "BaseBdev4", 00:12:30.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.749 "is_configured": false, 00:12:30.749 "data_offset": 0, 00:12:30.749 "data_size": 0 00:12:30.749 } 00:12:30.749 ] 00:12:30.749 }' 00:12:30.749 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.749 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.323 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:31.323 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.323 16:28:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:31.323 [2024-12-06 16:28:12.916759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.323 [2024-12-06 16:28:12.916824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:12:31.323 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.323 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.323 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.323 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.323 [2024-12-06 16:28:12.924802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.323 [2024-12-06 16:28:12.926925] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.324 [2024-12-06 16:28:12.927010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.324 [2024-12-06 16:28:12.927024] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.324 [2024-12-06 16:28:12.927034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.324 [2024-12-06 16:28:12.927040] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:31.324 [2024-12-06 16:28:12.927049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:31.324 16:28:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.324 "name": 
"Existed_Raid", 00:12:31.324 "uuid": "369dfe9a-555c-4f33-8594-4b6d7bbeee3c", 00:12:31.324 "strip_size_kb": 64, 00:12:31.324 "state": "configuring", 00:12:31.324 "raid_level": "raid0", 00:12:31.324 "superblock": true, 00:12:31.324 "num_base_bdevs": 4, 00:12:31.324 "num_base_bdevs_discovered": 1, 00:12:31.324 "num_base_bdevs_operational": 4, 00:12:31.324 "base_bdevs_list": [ 00:12:31.324 { 00:12:31.324 "name": "BaseBdev1", 00:12:31.324 "uuid": "238c423d-35e9-42c7-93b5-470d4201b43a", 00:12:31.324 "is_configured": true, 00:12:31.324 "data_offset": 2048, 00:12:31.324 "data_size": 63488 00:12:31.324 }, 00:12:31.324 { 00:12:31.324 "name": "BaseBdev2", 00:12:31.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.324 "is_configured": false, 00:12:31.324 "data_offset": 0, 00:12:31.324 "data_size": 0 00:12:31.324 }, 00:12:31.324 { 00:12:31.324 "name": "BaseBdev3", 00:12:31.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.324 "is_configured": false, 00:12:31.324 "data_offset": 0, 00:12:31.324 "data_size": 0 00:12:31.324 }, 00:12:31.324 { 00:12:31.324 "name": "BaseBdev4", 00:12:31.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.324 "is_configured": false, 00:12:31.324 "data_offset": 0, 00:12:31.324 "data_size": 0 00:12:31.324 } 00:12:31.324 ] 00:12:31.324 }' 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.324 16:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.583 [2024-12-06 16:28:13.383240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:12:31.583 BaseBdev2 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.583 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.583 [ 00:12:31.583 { 00:12:31.583 "name": "BaseBdev2", 00:12:31.583 "aliases": [ 00:12:31.583 "fd2c1268-9373-4703-8ceb-ec9460e9a2bb" 00:12:31.583 ], 00:12:31.583 "product_name": "Malloc disk", 00:12:31.583 "block_size": 512, 00:12:31.583 "num_blocks": 65536, 00:12:31.583 "uuid": "fd2c1268-9373-4703-8ceb-ec9460e9a2bb", 00:12:31.583 
"assigned_rate_limits": { 00:12:31.583 "rw_ios_per_sec": 0, 00:12:31.583 "rw_mbytes_per_sec": 0, 00:12:31.583 "r_mbytes_per_sec": 0, 00:12:31.583 "w_mbytes_per_sec": 0 00:12:31.583 }, 00:12:31.583 "claimed": true, 00:12:31.583 "claim_type": "exclusive_write", 00:12:31.583 "zoned": false, 00:12:31.583 "supported_io_types": { 00:12:31.583 "read": true, 00:12:31.583 "write": true, 00:12:31.583 "unmap": true, 00:12:31.583 "flush": true, 00:12:31.583 "reset": true, 00:12:31.583 "nvme_admin": false, 00:12:31.583 "nvme_io": false, 00:12:31.583 "nvme_io_md": false, 00:12:31.584 "write_zeroes": true, 00:12:31.584 "zcopy": true, 00:12:31.584 "get_zone_info": false, 00:12:31.584 "zone_management": false, 00:12:31.584 "zone_append": false, 00:12:31.584 "compare": false, 00:12:31.584 "compare_and_write": false, 00:12:31.584 "abort": true, 00:12:31.584 "seek_hole": false, 00:12:31.584 "seek_data": false, 00:12:31.584 "copy": true, 00:12:31.584 "nvme_iov_md": false 00:12:31.584 }, 00:12:31.584 "memory_domains": [ 00:12:31.584 { 00:12:31.584 "dma_device_id": "system", 00:12:31.584 "dma_device_type": 1 00:12:31.584 }, 00:12:31.584 { 00:12:31.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.584 "dma_device_type": 2 00:12:31.584 } 00:12:31.584 ], 00:12:31.584 "driver_specific": {} 00:12:31.584 } 00:12:31.584 ] 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.844 "name": "Existed_Raid", 00:12:31.844 "uuid": "369dfe9a-555c-4f33-8594-4b6d7bbeee3c", 00:12:31.844 "strip_size_kb": 64, 00:12:31.844 "state": "configuring", 00:12:31.844 "raid_level": "raid0", 00:12:31.844 "superblock": true, 00:12:31.844 "num_base_bdevs": 4, 00:12:31.844 "num_base_bdevs_discovered": 2, 00:12:31.844 "num_base_bdevs_operational": 4, 
00:12:31.844 "base_bdevs_list": [ 00:12:31.844 { 00:12:31.844 "name": "BaseBdev1", 00:12:31.844 "uuid": "238c423d-35e9-42c7-93b5-470d4201b43a", 00:12:31.844 "is_configured": true, 00:12:31.844 "data_offset": 2048, 00:12:31.844 "data_size": 63488 00:12:31.844 }, 00:12:31.844 { 00:12:31.844 "name": "BaseBdev2", 00:12:31.844 "uuid": "fd2c1268-9373-4703-8ceb-ec9460e9a2bb", 00:12:31.844 "is_configured": true, 00:12:31.844 "data_offset": 2048, 00:12:31.844 "data_size": 63488 00:12:31.844 }, 00:12:31.844 { 00:12:31.844 "name": "BaseBdev3", 00:12:31.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.844 "is_configured": false, 00:12:31.844 "data_offset": 0, 00:12:31.844 "data_size": 0 00:12:31.844 }, 00:12:31.844 { 00:12:31.844 "name": "BaseBdev4", 00:12:31.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.844 "is_configured": false, 00:12:31.844 "data_offset": 0, 00:12:31.844 "data_size": 0 00:12:31.844 } 00:12:31.844 ] 00:12:31.844 }' 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.844 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.103 [2024-12-06 16:28:13.851312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.103 BaseBdev3 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.103 [ 00:12:32.103 { 00:12:32.103 "name": "BaseBdev3", 00:12:32.103 "aliases": [ 00:12:32.103 "0e6394f7-2e94-45b1-8dee-a3c7b1089b2b" 00:12:32.103 ], 00:12:32.103 "product_name": "Malloc disk", 00:12:32.103 "block_size": 512, 00:12:32.103 "num_blocks": 65536, 00:12:32.103 "uuid": "0e6394f7-2e94-45b1-8dee-a3c7b1089b2b", 00:12:32.103 "assigned_rate_limits": { 00:12:32.103 "rw_ios_per_sec": 0, 00:12:32.103 "rw_mbytes_per_sec": 0, 00:12:32.103 "r_mbytes_per_sec": 0, 00:12:32.103 "w_mbytes_per_sec": 0 00:12:32.103 }, 00:12:32.103 "claimed": true, 00:12:32.103 "claim_type": "exclusive_write", 00:12:32.103 "zoned": false, 00:12:32.103 "supported_io_types": { 00:12:32.103 "read": true, 00:12:32.103 
"write": true, 00:12:32.103 "unmap": true, 00:12:32.103 "flush": true, 00:12:32.103 "reset": true, 00:12:32.103 "nvme_admin": false, 00:12:32.103 "nvme_io": false, 00:12:32.103 "nvme_io_md": false, 00:12:32.103 "write_zeroes": true, 00:12:32.103 "zcopy": true, 00:12:32.103 "get_zone_info": false, 00:12:32.103 "zone_management": false, 00:12:32.103 "zone_append": false, 00:12:32.103 "compare": false, 00:12:32.103 "compare_and_write": false, 00:12:32.103 "abort": true, 00:12:32.103 "seek_hole": false, 00:12:32.103 "seek_data": false, 00:12:32.103 "copy": true, 00:12:32.103 "nvme_iov_md": false 00:12:32.103 }, 00:12:32.103 "memory_domains": [ 00:12:32.103 { 00:12:32.103 "dma_device_id": "system", 00:12:32.103 "dma_device_type": 1 00:12:32.103 }, 00:12:32.103 { 00:12:32.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.103 "dma_device_type": 2 00:12:32.103 } 00:12:32.103 ], 00:12:32.103 "driver_specific": {} 00:12:32.103 } 00:12:32.103 ] 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:32.103 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.104 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.364 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.364 "name": "Existed_Raid", 00:12:32.364 "uuid": "369dfe9a-555c-4f33-8594-4b6d7bbeee3c", 00:12:32.364 "strip_size_kb": 64, 00:12:32.364 "state": "configuring", 00:12:32.364 "raid_level": "raid0", 00:12:32.364 "superblock": true, 00:12:32.364 "num_base_bdevs": 4, 00:12:32.364 "num_base_bdevs_discovered": 3, 00:12:32.364 "num_base_bdevs_operational": 4, 00:12:32.364 "base_bdevs_list": [ 00:12:32.364 { 00:12:32.364 "name": "BaseBdev1", 00:12:32.364 "uuid": "238c423d-35e9-42c7-93b5-470d4201b43a", 00:12:32.364 "is_configured": true, 00:12:32.364 "data_offset": 2048, 00:12:32.364 "data_size": 63488 00:12:32.364 }, 00:12:32.364 { 00:12:32.364 "name": "BaseBdev2", 00:12:32.364 "uuid": 
"fd2c1268-9373-4703-8ceb-ec9460e9a2bb", 00:12:32.364 "is_configured": true, 00:12:32.364 "data_offset": 2048, 00:12:32.364 "data_size": 63488 00:12:32.364 }, 00:12:32.364 { 00:12:32.364 "name": "BaseBdev3", 00:12:32.364 "uuid": "0e6394f7-2e94-45b1-8dee-a3c7b1089b2b", 00:12:32.364 "is_configured": true, 00:12:32.364 "data_offset": 2048, 00:12:32.364 "data_size": 63488 00:12:32.364 }, 00:12:32.364 { 00:12:32.364 "name": "BaseBdev4", 00:12:32.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.364 "is_configured": false, 00:12:32.364 "data_offset": 0, 00:12:32.364 "data_size": 0 00:12:32.364 } 00:12:32.364 ] 00:12:32.364 }' 00:12:32.364 16:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.364 16:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.624 [2024-12-06 16:28:14.325766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:32.624 [2024-12-06 16:28:14.325989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:32.624 [2024-12-06 16:28:14.326005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:32.624 [2024-12-06 16:28:14.326343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:32.624 [2024-12-06 16:28:14.326482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:32.624 [2024-12-06 16:28:14.326496] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 
00:12:32.624 BaseBdev4 00:12:32.624 [2024-12-06 16:28:14.326628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.624 [ 00:12:32.624 { 00:12:32.624 "name": "BaseBdev4", 00:12:32.624 "aliases": [ 00:12:32.624 "d1e6bcc4-0b71-4ea3-b23e-23fcfb467512" 00:12:32.624 ], 00:12:32.624 "product_name": "Malloc disk", 00:12:32.624 "block_size": 512, 00:12:32.624 
"num_blocks": 65536, 00:12:32.624 "uuid": "d1e6bcc4-0b71-4ea3-b23e-23fcfb467512", 00:12:32.624 "assigned_rate_limits": { 00:12:32.624 "rw_ios_per_sec": 0, 00:12:32.624 "rw_mbytes_per_sec": 0, 00:12:32.624 "r_mbytes_per_sec": 0, 00:12:32.624 "w_mbytes_per_sec": 0 00:12:32.624 }, 00:12:32.624 "claimed": true, 00:12:32.624 "claim_type": "exclusive_write", 00:12:32.624 "zoned": false, 00:12:32.624 "supported_io_types": { 00:12:32.624 "read": true, 00:12:32.624 "write": true, 00:12:32.624 "unmap": true, 00:12:32.624 "flush": true, 00:12:32.624 "reset": true, 00:12:32.624 "nvme_admin": false, 00:12:32.624 "nvme_io": false, 00:12:32.624 "nvme_io_md": false, 00:12:32.624 "write_zeroes": true, 00:12:32.624 "zcopy": true, 00:12:32.624 "get_zone_info": false, 00:12:32.624 "zone_management": false, 00:12:32.624 "zone_append": false, 00:12:32.624 "compare": false, 00:12:32.624 "compare_and_write": false, 00:12:32.624 "abort": true, 00:12:32.624 "seek_hole": false, 00:12:32.624 "seek_data": false, 00:12:32.624 "copy": true, 00:12:32.624 "nvme_iov_md": false 00:12:32.624 }, 00:12:32.624 "memory_domains": [ 00:12:32.624 { 00:12:32.624 "dma_device_id": "system", 00:12:32.624 "dma_device_type": 1 00:12:32.624 }, 00:12:32.624 { 00:12:32.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.624 "dma_device_type": 2 00:12:32.624 } 00:12:32.624 ], 00:12:32.624 "driver_specific": {} 00:12:32.624 } 00:12:32.624 ] 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.624 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.624 "name": "Existed_Raid", 00:12:32.624 "uuid": "369dfe9a-555c-4f33-8594-4b6d7bbeee3c", 00:12:32.624 "strip_size_kb": 64, 00:12:32.624 "state": "online", 00:12:32.624 "raid_level": "raid0", 00:12:32.625 "superblock": true, 00:12:32.625 "num_base_bdevs": 4, 
00:12:32.625 "num_base_bdevs_discovered": 4, 00:12:32.625 "num_base_bdevs_operational": 4, 00:12:32.625 "base_bdevs_list": [ 00:12:32.625 { 00:12:32.625 "name": "BaseBdev1", 00:12:32.625 "uuid": "238c423d-35e9-42c7-93b5-470d4201b43a", 00:12:32.625 "is_configured": true, 00:12:32.625 "data_offset": 2048, 00:12:32.625 "data_size": 63488 00:12:32.625 }, 00:12:32.625 { 00:12:32.625 "name": "BaseBdev2", 00:12:32.625 "uuid": "fd2c1268-9373-4703-8ceb-ec9460e9a2bb", 00:12:32.625 "is_configured": true, 00:12:32.625 "data_offset": 2048, 00:12:32.625 "data_size": 63488 00:12:32.625 }, 00:12:32.625 { 00:12:32.625 "name": "BaseBdev3", 00:12:32.625 "uuid": "0e6394f7-2e94-45b1-8dee-a3c7b1089b2b", 00:12:32.625 "is_configured": true, 00:12:32.625 "data_offset": 2048, 00:12:32.625 "data_size": 63488 00:12:32.625 }, 00:12:32.625 { 00:12:32.625 "name": "BaseBdev4", 00:12:32.625 "uuid": "d1e6bcc4-0b71-4ea3-b23e-23fcfb467512", 00:12:32.625 "is_configured": true, 00:12:32.625 "data_offset": 2048, 00:12:32.625 "data_size": 63488 00:12:32.625 } 00:12:32.625 ] 00:12:32.625 }' 00:12:32.625 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.625 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.192 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:33.192 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:33.192 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:33.192 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:33.192 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:33.192 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:33.192 
16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:33.192 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:33.192 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.192 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.192 [2024-12-06 16:28:14.805377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:33.192 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.192 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:33.192 "name": "Existed_Raid", 00:12:33.192 "aliases": [ 00:12:33.192 "369dfe9a-555c-4f33-8594-4b6d7bbeee3c" 00:12:33.192 ], 00:12:33.192 "product_name": "Raid Volume", 00:12:33.192 "block_size": 512, 00:12:33.192 "num_blocks": 253952, 00:12:33.192 "uuid": "369dfe9a-555c-4f33-8594-4b6d7bbeee3c", 00:12:33.192 "assigned_rate_limits": { 00:12:33.192 "rw_ios_per_sec": 0, 00:12:33.192 "rw_mbytes_per_sec": 0, 00:12:33.192 "r_mbytes_per_sec": 0, 00:12:33.192 "w_mbytes_per_sec": 0 00:12:33.192 }, 00:12:33.192 "claimed": false, 00:12:33.192 "zoned": false, 00:12:33.192 "supported_io_types": { 00:12:33.192 "read": true, 00:12:33.192 "write": true, 00:12:33.192 "unmap": true, 00:12:33.192 "flush": true, 00:12:33.192 "reset": true, 00:12:33.192 "nvme_admin": false, 00:12:33.192 "nvme_io": false, 00:12:33.192 "nvme_io_md": false, 00:12:33.192 "write_zeroes": true, 00:12:33.192 "zcopy": false, 00:12:33.192 "get_zone_info": false, 00:12:33.192 "zone_management": false, 00:12:33.192 "zone_append": false, 00:12:33.192 "compare": false, 00:12:33.192 "compare_and_write": false, 00:12:33.192 "abort": false, 00:12:33.192 "seek_hole": false, 00:12:33.192 "seek_data": false, 00:12:33.192 "copy": false, 00:12:33.192 
"nvme_iov_md": false 00:12:33.192 }, 00:12:33.192 "memory_domains": [ 00:12:33.192 { 00:12:33.192 "dma_device_id": "system", 00:12:33.192 "dma_device_type": 1 00:12:33.192 }, 00:12:33.192 { 00:12:33.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.192 "dma_device_type": 2 00:12:33.192 }, 00:12:33.192 { 00:12:33.192 "dma_device_id": "system", 00:12:33.192 "dma_device_type": 1 00:12:33.192 }, 00:12:33.192 { 00:12:33.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.192 "dma_device_type": 2 00:12:33.192 }, 00:12:33.192 { 00:12:33.192 "dma_device_id": "system", 00:12:33.192 "dma_device_type": 1 00:12:33.192 }, 00:12:33.192 { 00:12:33.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.192 "dma_device_type": 2 00:12:33.192 }, 00:12:33.192 { 00:12:33.192 "dma_device_id": "system", 00:12:33.192 "dma_device_type": 1 00:12:33.192 }, 00:12:33.192 { 00:12:33.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.192 "dma_device_type": 2 00:12:33.192 } 00:12:33.192 ], 00:12:33.192 "driver_specific": { 00:12:33.192 "raid": { 00:12:33.192 "uuid": "369dfe9a-555c-4f33-8594-4b6d7bbeee3c", 00:12:33.192 "strip_size_kb": 64, 00:12:33.192 "state": "online", 00:12:33.193 "raid_level": "raid0", 00:12:33.193 "superblock": true, 00:12:33.193 "num_base_bdevs": 4, 00:12:33.193 "num_base_bdevs_discovered": 4, 00:12:33.193 "num_base_bdevs_operational": 4, 00:12:33.193 "base_bdevs_list": [ 00:12:33.193 { 00:12:33.193 "name": "BaseBdev1", 00:12:33.193 "uuid": "238c423d-35e9-42c7-93b5-470d4201b43a", 00:12:33.193 "is_configured": true, 00:12:33.193 "data_offset": 2048, 00:12:33.193 "data_size": 63488 00:12:33.193 }, 00:12:33.193 { 00:12:33.193 "name": "BaseBdev2", 00:12:33.193 "uuid": "fd2c1268-9373-4703-8ceb-ec9460e9a2bb", 00:12:33.193 "is_configured": true, 00:12:33.193 "data_offset": 2048, 00:12:33.193 "data_size": 63488 00:12:33.193 }, 00:12:33.193 { 00:12:33.193 "name": "BaseBdev3", 00:12:33.193 "uuid": "0e6394f7-2e94-45b1-8dee-a3c7b1089b2b", 00:12:33.193 "is_configured": true, 
00:12:33.193 "data_offset": 2048, 00:12:33.193 "data_size": 63488 00:12:33.193 }, 00:12:33.193 { 00:12:33.193 "name": "BaseBdev4", 00:12:33.193 "uuid": "d1e6bcc4-0b71-4ea3-b23e-23fcfb467512", 00:12:33.193 "is_configured": true, 00:12:33.193 "data_offset": 2048, 00:12:33.193 "data_size": 63488 00:12:33.193 } 00:12:33.193 ] 00:12:33.193 } 00:12:33.193 } 00:12:33.193 }' 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:33.193 BaseBdev2 00:12:33.193 BaseBdev3 00:12:33.193 BaseBdev4' 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.193 16:28:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.193 16:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.193 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.193 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.193 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.193 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.453 [2024-12-06 16:28:15.108523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:33.453 [2024-12-06 16:28:15.108596] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.453 [2024-12-06 16:28:15.108708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.453 "name": "Existed_Raid", 00:12:33.453 "uuid": "369dfe9a-555c-4f33-8594-4b6d7bbeee3c", 00:12:33.453 "strip_size_kb": 64, 00:12:33.453 "state": "offline", 00:12:33.453 "raid_level": "raid0", 00:12:33.453 "superblock": true, 00:12:33.453 "num_base_bdevs": 4, 00:12:33.453 "num_base_bdevs_discovered": 3, 00:12:33.453 "num_base_bdevs_operational": 3, 00:12:33.453 "base_bdevs_list": [ 00:12:33.453 { 00:12:33.453 "name": null, 00:12:33.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.453 "is_configured": false, 00:12:33.453 "data_offset": 0, 00:12:33.453 "data_size": 63488 00:12:33.453 }, 00:12:33.453 { 00:12:33.453 "name": "BaseBdev2", 00:12:33.453 "uuid": "fd2c1268-9373-4703-8ceb-ec9460e9a2bb", 00:12:33.453 "is_configured": true, 00:12:33.453 "data_offset": 2048, 00:12:33.453 "data_size": 63488 00:12:33.453 }, 00:12:33.453 { 00:12:33.453 "name": "BaseBdev3", 00:12:33.453 "uuid": "0e6394f7-2e94-45b1-8dee-a3c7b1089b2b", 00:12:33.453 "is_configured": true, 00:12:33.453 "data_offset": 2048, 00:12:33.453 "data_size": 63488 00:12:33.453 }, 00:12:33.453 { 00:12:33.453 "name": "BaseBdev4", 00:12:33.453 "uuid": "d1e6bcc4-0b71-4ea3-b23e-23fcfb467512", 00:12:33.453 "is_configured": true, 00:12:33.453 "data_offset": 2048, 00:12:33.453 "data_size": 63488 00:12:33.453 } 00:12:33.453 ] 00:12:33.453 }' 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.453 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.021 
16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.021 [2024-12-06 16:28:15.618980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.021 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.022 [2024-12-06 16:28:15.686193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:34.022 16:28:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.022 [2024-12-06 16:28:15.737267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:34.022 [2024-12-06 16:28:15.737314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.022 BaseBdev2 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.022 [ 00:12:34.022 { 00:12:34.022 "name": "BaseBdev2", 00:12:34.022 "aliases": [ 00:12:34.022 
"7f9cc932-43b8-4152-aaea-83650899dbc5" 00:12:34.022 ], 00:12:34.022 "product_name": "Malloc disk", 00:12:34.022 "block_size": 512, 00:12:34.022 "num_blocks": 65536, 00:12:34.022 "uuid": "7f9cc932-43b8-4152-aaea-83650899dbc5", 00:12:34.022 "assigned_rate_limits": { 00:12:34.022 "rw_ios_per_sec": 0, 00:12:34.022 "rw_mbytes_per_sec": 0, 00:12:34.022 "r_mbytes_per_sec": 0, 00:12:34.022 "w_mbytes_per_sec": 0 00:12:34.022 }, 00:12:34.022 "claimed": false, 00:12:34.022 "zoned": false, 00:12:34.022 "supported_io_types": { 00:12:34.022 "read": true, 00:12:34.022 "write": true, 00:12:34.022 "unmap": true, 00:12:34.022 "flush": true, 00:12:34.022 "reset": true, 00:12:34.022 "nvme_admin": false, 00:12:34.022 "nvme_io": false, 00:12:34.022 "nvme_io_md": false, 00:12:34.022 "write_zeroes": true, 00:12:34.022 "zcopy": true, 00:12:34.022 "get_zone_info": false, 00:12:34.022 "zone_management": false, 00:12:34.022 "zone_append": false, 00:12:34.022 "compare": false, 00:12:34.022 "compare_and_write": false, 00:12:34.022 "abort": true, 00:12:34.022 "seek_hole": false, 00:12:34.022 "seek_data": false, 00:12:34.022 "copy": true, 00:12:34.022 "nvme_iov_md": false 00:12:34.022 }, 00:12:34.022 "memory_domains": [ 00:12:34.022 { 00:12:34.022 "dma_device_id": "system", 00:12:34.022 "dma_device_type": 1 00:12:34.022 }, 00:12:34.022 { 00:12:34.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.022 "dma_device_type": 2 00:12:34.022 } 00:12:34.022 ], 00:12:34.022 "driver_specific": {} 00:12:34.022 } 00:12:34.022 ] 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:34.022 16:28:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:34.022 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.023 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.282 BaseBdev3 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.282 [ 00:12:34.282 { 
00:12:34.282 "name": "BaseBdev3", 00:12:34.282 "aliases": [ 00:12:34.282 "8f556cec-57c8-4058-9dd4-1809eb251cc5" 00:12:34.282 ], 00:12:34.282 "product_name": "Malloc disk", 00:12:34.282 "block_size": 512, 00:12:34.282 "num_blocks": 65536, 00:12:34.282 "uuid": "8f556cec-57c8-4058-9dd4-1809eb251cc5", 00:12:34.282 "assigned_rate_limits": { 00:12:34.282 "rw_ios_per_sec": 0, 00:12:34.282 "rw_mbytes_per_sec": 0, 00:12:34.282 "r_mbytes_per_sec": 0, 00:12:34.282 "w_mbytes_per_sec": 0 00:12:34.282 }, 00:12:34.282 "claimed": false, 00:12:34.282 "zoned": false, 00:12:34.282 "supported_io_types": { 00:12:34.282 "read": true, 00:12:34.282 "write": true, 00:12:34.282 "unmap": true, 00:12:34.282 "flush": true, 00:12:34.282 "reset": true, 00:12:34.282 "nvme_admin": false, 00:12:34.282 "nvme_io": false, 00:12:34.282 "nvme_io_md": false, 00:12:34.282 "write_zeroes": true, 00:12:34.282 "zcopy": true, 00:12:34.282 "get_zone_info": false, 00:12:34.282 "zone_management": false, 00:12:34.282 "zone_append": false, 00:12:34.282 "compare": false, 00:12:34.282 "compare_and_write": false, 00:12:34.282 "abort": true, 00:12:34.282 "seek_hole": false, 00:12:34.282 "seek_data": false, 00:12:34.282 "copy": true, 00:12:34.282 "nvme_iov_md": false 00:12:34.282 }, 00:12:34.282 "memory_domains": [ 00:12:34.282 { 00:12:34.282 "dma_device_id": "system", 00:12:34.282 "dma_device_type": 1 00:12:34.282 }, 00:12:34.282 { 00:12:34.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.282 "dma_device_type": 2 00:12:34.282 } 00:12:34.282 ], 00:12:34.282 "driver_specific": {} 00:12:34.282 } 00:12:34.282 ] 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.282 BaseBdev4 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.282 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:34.282 [ 00:12:34.282 { 00:12:34.282 "name": "BaseBdev4", 00:12:34.282 "aliases": [ 00:12:34.282 "85a56727-244d-4430-85af-f25660fba723" 00:12:34.282 ], 00:12:34.282 "product_name": "Malloc disk", 00:12:34.282 "block_size": 512, 00:12:34.282 "num_blocks": 65536, 00:12:34.282 "uuid": "85a56727-244d-4430-85af-f25660fba723", 00:12:34.282 "assigned_rate_limits": { 00:12:34.283 "rw_ios_per_sec": 0, 00:12:34.283 "rw_mbytes_per_sec": 0, 00:12:34.283 "r_mbytes_per_sec": 0, 00:12:34.283 "w_mbytes_per_sec": 0 00:12:34.283 }, 00:12:34.283 "claimed": false, 00:12:34.283 "zoned": false, 00:12:34.283 "supported_io_types": { 00:12:34.283 "read": true, 00:12:34.283 "write": true, 00:12:34.283 "unmap": true, 00:12:34.283 "flush": true, 00:12:34.283 "reset": true, 00:12:34.283 "nvme_admin": false, 00:12:34.283 "nvme_io": false, 00:12:34.283 "nvme_io_md": false, 00:12:34.283 "write_zeroes": true, 00:12:34.283 "zcopy": true, 00:12:34.283 "get_zone_info": false, 00:12:34.283 "zone_management": false, 00:12:34.283 "zone_append": false, 00:12:34.283 "compare": false, 00:12:34.283 "compare_and_write": false, 00:12:34.283 "abort": true, 00:12:34.283 "seek_hole": false, 00:12:34.283 "seek_data": false, 00:12:34.283 "copy": true, 00:12:34.283 "nvme_iov_md": false 00:12:34.283 }, 00:12:34.283 "memory_domains": [ 00:12:34.283 { 00:12:34.283 "dma_device_id": "system", 00:12:34.283 "dma_device_type": 1 00:12:34.283 }, 00:12:34.283 { 00:12:34.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.283 "dma_device_type": 2 00:12:34.283 } 00:12:34.283 ], 00:12:34.283 "driver_specific": {} 00:12:34.283 } 00:12:34.283 ] 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:34.283 16:28:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.283 [2024-12-06 16:28:15.970513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:34.283 [2024-12-06 16:28:15.970615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:34.283 [2024-12-06 16:28:15.970662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.283 [2024-12-06 16:28:15.972691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.283 [2024-12-06 16:28:15.972798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.283 16:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.283 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.283 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.283 "name": "Existed_Raid", 00:12:34.283 "uuid": "68e5272e-a8c7-4e7b-bc9e-ad448a2d45cf", 00:12:34.283 "strip_size_kb": 64, 00:12:34.283 "state": "configuring", 00:12:34.283 "raid_level": "raid0", 00:12:34.283 "superblock": true, 00:12:34.283 "num_base_bdevs": 4, 00:12:34.283 "num_base_bdevs_discovered": 3, 00:12:34.283 "num_base_bdevs_operational": 4, 00:12:34.283 "base_bdevs_list": [ 00:12:34.283 { 00:12:34.283 "name": "BaseBdev1", 00:12:34.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.283 "is_configured": false, 00:12:34.283 "data_offset": 0, 00:12:34.283 "data_size": 0 00:12:34.283 }, 00:12:34.283 { 00:12:34.283 "name": "BaseBdev2", 00:12:34.283 "uuid": "7f9cc932-43b8-4152-aaea-83650899dbc5", 00:12:34.283 "is_configured": true, 00:12:34.283 "data_offset": 2048, 00:12:34.283 "data_size": 63488 
00:12:34.283 }, 00:12:34.283 { 00:12:34.283 "name": "BaseBdev3", 00:12:34.283 "uuid": "8f556cec-57c8-4058-9dd4-1809eb251cc5", 00:12:34.283 "is_configured": true, 00:12:34.283 "data_offset": 2048, 00:12:34.283 "data_size": 63488 00:12:34.283 }, 00:12:34.283 { 00:12:34.283 "name": "BaseBdev4", 00:12:34.283 "uuid": "85a56727-244d-4430-85af-f25660fba723", 00:12:34.283 "is_configured": true, 00:12:34.283 "data_offset": 2048, 00:12:34.283 "data_size": 63488 00:12:34.283 } 00:12:34.283 ] 00:12:34.283 }' 00:12:34.283 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.283 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.851 [2024-12-06 16:28:16.445695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.851 "name": "Existed_Raid", 00:12:34.851 "uuid": "68e5272e-a8c7-4e7b-bc9e-ad448a2d45cf", 00:12:34.851 "strip_size_kb": 64, 00:12:34.851 "state": "configuring", 00:12:34.851 "raid_level": "raid0", 00:12:34.851 "superblock": true, 00:12:34.851 "num_base_bdevs": 4, 00:12:34.851 "num_base_bdevs_discovered": 2, 00:12:34.851 "num_base_bdevs_operational": 4, 00:12:34.851 "base_bdevs_list": [ 00:12:34.851 { 00:12:34.851 "name": "BaseBdev1", 00:12:34.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.851 "is_configured": false, 00:12:34.851 "data_offset": 0, 00:12:34.851 "data_size": 0 00:12:34.851 }, 00:12:34.851 { 00:12:34.851 "name": null, 00:12:34.851 "uuid": "7f9cc932-43b8-4152-aaea-83650899dbc5", 00:12:34.851 "is_configured": false, 00:12:34.851 "data_offset": 0, 00:12:34.851 "data_size": 63488 
00:12:34.851 }, 00:12:34.851 { 00:12:34.851 "name": "BaseBdev3", 00:12:34.851 "uuid": "8f556cec-57c8-4058-9dd4-1809eb251cc5", 00:12:34.851 "is_configured": true, 00:12:34.851 "data_offset": 2048, 00:12:34.851 "data_size": 63488 00:12:34.851 }, 00:12:34.851 { 00:12:34.851 "name": "BaseBdev4", 00:12:34.851 "uuid": "85a56727-244d-4430-85af-f25660fba723", 00:12:34.851 "is_configured": true, 00:12:34.851 "data_offset": 2048, 00:12:34.851 "data_size": 63488 00:12:34.851 } 00:12:34.851 ] 00:12:34.851 }' 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.851 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.110 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.110 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.110 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.110 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:35.110 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.369 [2024-12-06 16:28:16.983911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.369 BaseBdev1 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.369 16:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.369 [ 00:12:35.369 { 00:12:35.369 "name": "BaseBdev1", 00:12:35.369 "aliases": [ 00:12:35.369 "6ff46224-5552-462f-9fc7-5214a0a55317" 00:12:35.369 ], 00:12:35.369 "product_name": "Malloc disk", 00:12:35.369 "block_size": 512, 00:12:35.369 "num_blocks": 65536, 00:12:35.369 "uuid": "6ff46224-5552-462f-9fc7-5214a0a55317", 00:12:35.369 "assigned_rate_limits": { 00:12:35.369 "rw_ios_per_sec": 0, 00:12:35.369 "rw_mbytes_per_sec": 0, 
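The `waitforbdev BaseBdev1` call above (from `autotest_common.sh`) polls `rpc_cmd bdev_get_bdevs -b <name> -t <timeout>` until the bdev shows up, defaulting `bdev_timeout` to 2000 ms when none is given. A rough Python analogue of that retry loop, with a stubbed lookup standing in for the real RPC:

```python
import time

def wait_for_bdev(lookup, name, timeout_s=2.0, interval_s=0.1):
    """Poll lookup(name) until it returns truthy or the timeout expires,
    mirroring the waitforbdev retry loop (2000 ms default in the trace)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if lookup(name):
            return True
        time.sleep(interval_s)
    return False

# Stub for `rpc_cmd bdev_get_bdevs -b <name>`: the bdev is already present
# here; against a live SPDK target this would issue the JSON-RPC call.
bdevs = {"BaseBdev1": {"block_size": 512, "num_blocks": 65536}}
print(wait_for_bdev(lambda n: bdevs.get(n), "BaseBdev1"))  # True
```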
00:12:35.369 "r_mbytes_per_sec": 0, 00:12:35.369 "w_mbytes_per_sec": 0 00:12:35.369 }, 00:12:35.369 "claimed": true, 00:12:35.369 "claim_type": "exclusive_write", 00:12:35.369 "zoned": false, 00:12:35.369 "supported_io_types": { 00:12:35.369 "read": true, 00:12:35.369 "write": true, 00:12:35.369 "unmap": true, 00:12:35.369 "flush": true, 00:12:35.369 "reset": true, 00:12:35.369 "nvme_admin": false, 00:12:35.369 "nvme_io": false, 00:12:35.369 "nvme_io_md": false, 00:12:35.369 "write_zeroes": true, 00:12:35.369 "zcopy": true, 00:12:35.369 "get_zone_info": false, 00:12:35.369 "zone_management": false, 00:12:35.369 "zone_append": false, 00:12:35.369 "compare": false, 00:12:35.369 "compare_and_write": false, 00:12:35.369 "abort": true, 00:12:35.369 "seek_hole": false, 00:12:35.369 "seek_data": false, 00:12:35.369 "copy": true, 00:12:35.369 "nvme_iov_md": false 00:12:35.369 }, 00:12:35.369 "memory_domains": [ 00:12:35.369 { 00:12:35.369 "dma_device_id": "system", 00:12:35.369 "dma_device_type": 1 00:12:35.369 }, 00:12:35.369 { 00:12:35.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.369 "dma_device_type": 2 00:12:35.369 } 00:12:35.369 ], 00:12:35.369 "driver_specific": {} 00:12:35.369 } 00:12:35.369 ] 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:35.369 16:28:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.369 "name": "Existed_Raid", 00:12:35.369 "uuid": "68e5272e-a8c7-4e7b-bc9e-ad448a2d45cf", 00:12:35.369 "strip_size_kb": 64, 00:12:35.369 "state": "configuring", 00:12:35.369 "raid_level": "raid0", 00:12:35.369 "superblock": true, 00:12:35.369 "num_base_bdevs": 4, 00:12:35.369 "num_base_bdevs_discovered": 3, 00:12:35.369 "num_base_bdevs_operational": 4, 00:12:35.369 "base_bdevs_list": [ 00:12:35.369 { 00:12:35.369 "name": "BaseBdev1", 00:12:35.369 "uuid": "6ff46224-5552-462f-9fc7-5214a0a55317", 00:12:35.369 "is_configured": true, 00:12:35.369 "data_offset": 2048, 00:12:35.369 "data_size": 63488 00:12:35.369 }, 00:12:35.369 { 
00:12:35.369 "name": null, 00:12:35.369 "uuid": "7f9cc932-43b8-4152-aaea-83650899dbc5", 00:12:35.369 "is_configured": false, 00:12:35.369 "data_offset": 0, 00:12:35.369 "data_size": 63488 00:12:35.369 }, 00:12:35.369 { 00:12:35.369 "name": "BaseBdev3", 00:12:35.369 "uuid": "8f556cec-57c8-4058-9dd4-1809eb251cc5", 00:12:35.369 "is_configured": true, 00:12:35.369 "data_offset": 2048, 00:12:35.369 "data_size": 63488 00:12:35.369 }, 00:12:35.369 { 00:12:35.369 "name": "BaseBdev4", 00:12:35.369 "uuid": "85a56727-244d-4430-85af-f25660fba723", 00:12:35.369 "is_configured": true, 00:12:35.369 "data_offset": 2048, 00:12:35.369 "data_size": 63488 00:12:35.369 } 00:12:35.369 ] 00:12:35.369 }' 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.369 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.977 [2024-12-06 16:28:17.567001] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.977 16:28:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.977 "name": "Existed_Raid", 00:12:35.977 "uuid": "68e5272e-a8c7-4e7b-bc9e-ad448a2d45cf", 00:12:35.977 "strip_size_kb": 64, 00:12:35.977 "state": "configuring", 00:12:35.977 "raid_level": "raid0", 00:12:35.977 "superblock": true, 00:12:35.977 "num_base_bdevs": 4, 00:12:35.977 "num_base_bdevs_discovered": 2, 00:12:35.977 "num_base_bdevs_operational": 4, 00:12:35.977 "base_bdevs_list": [ 00:12:35.977 { 00:12:35.977 "name": "BaseBdev1", 00:12:35.977 "uuid": "6ff46224-5552-462f-9fc7-5214a0a55317", 00:12:35.977 "is_configured": true, 00:12:35.977 "data_offset": 2048, 00:12:35.977 "data_size": 63488 00:12:35.977 }, 00:12:35.977 { 00:12:35.977 "name": null, 00:12:35.977 "uuid": "7f9cc932-43b8-4152-aaea-83650899dbc5", 00:12:35.977 "is_configured": false, 00:12:35.977 "data_offset": 0, 00:12:35.977 "data_size": 63488 00:12:35.977 }, 00:12:35.977 { 00:12:35.977 "name": null, 00:12:35.977 "uuid": "8f556cec-57c8-4058-9dd4-1809eb251cc5", 00:12:35.977 "is_configured": false, 00:12:35.977 "data_offset": 0, 00:12:35.977 "data_size": 63488 00:12:35.977 }, 00:12:35.977 { 00:12:35.977 "name": "BaseBdev4", 00:12:35.977 "uuid": "85a56727-244d-4430-85af-f25660fba723", 00:12:35.977 "is_configured": true, 00:12:35.977 "data_offset": 2048, 00:12:35.977 "data_size": 63488 00:12:35.977 } 00:12:35.977 ] 00:12:35.977 }' 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.977 16:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.237 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.237 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.237 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.237 16:28:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:36.237 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.496 [2024-12-06 16:28:18.090179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.496 "name": "Existed_Raid", 00:12:36.496 "uuid": "68e5272e-a8c7-4e7b-bc9e-ad448a2d45cf", 00:12:36.496 "strip_size_kb": 64, 00:12:36.496 "state": "configuring", 00:12:36.496 "raid_level": "raid0", 00:12:36.496 "superblock": true, 00:12:36.496 "num_base_bdevs": 4, 00:12:36.496 "num_base_bdevs_discovered": 3, 00:12:36.496 "num_base_bdevs_operational": 4, 00:12:36.496 "base_bdevs_list": [ 00:12:36.496 { 00:12:36.496 "name": "BaseBdev1", 00:12:36.496 "uuid": "6ff46224-5552-462f-9fc7-5214a0a55317", 00:12:36.496 "is_configured": true, 00:12:36.496 "data_offset": 2048, 00:12:36.496 "data_size": 63488 00:12:36.496 }, 00:12:36.496 { 00:12:36.496 "name": null, 00:12:36.496 "uuid": "7f9cc932-43b8-4152-aaea-83650899dbc5", 00:12:36.496 "is_configured": false, 00:12:36.496 "data_offset": 0, 00:12:36.496 "data_size": 63488 00:12:36.496 }, 00:12:36.496 { 00:12:36.496 "name": "BaseBdev3", 00:12:36.496 "uuid": "8f556cec-57c8-4058-9dd4-1809eb251cc5", 00:12:36.496 "is_configured": true, 00:12:36.496 "data_offset": 2048, 00:12:36.496 "data_size": 63488 00:12:36.496 }, 00:12:36.496 { 00:12:36.496 "name": "BaseBdev4", 00:12:36.496 "uuid": 
"85a56727-244d-4430-85af-f25660fba723", 00:12:36.496 "is_configured": true, 00:12:36.496 "data_offset": 2048, 00:12:36.496 "data_size": 63488 00:12:36.496 } 00:12:36.496 ] 00:12:36.496 }' 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.496 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.756 [2024-12-06 16:28:18.569352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.756 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.015 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.016 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.016 "name": "Existed_Raid", 00:12:37.016 "uuid": "68e5272e-a8c7-4e7b-bc9e-ad448a2d45cf", 00:12:37.016 "strip_size_kb": 64, 00:12:37.016 "state": "configuring", 00:12:37.016 "raid_level": "raid0", 00:12:37.016 "superblock": true, 00:12:37.016 "num_base_bdevs": 4, 00:12:37.016 "num_base_bdevs_discovered": 2, 00:12:37.016 "num_base_bdevs_operational": 4, 00:12:37.016 "base_bdevs_list": [ 00:12:37.016 { 00:12:37.016 "name": null, 00:12:37.016 
"uuid": "6ff46224-5552-462f-9fc7-5214a0a55317", 00:12:37.016 "is_configured": false, 00:12:37.016 "data_offset": 0, 00:12:37.016 "data_size": 63488 00:12:37.016 }, 00:12:37.016 { 00:12:37.016 "name": null, 00:12:37.016 "uuid": "7f9cc932-43b8-4152-aaea-83650899dbc5", 00:12:37.016 "is_configured": false, 00:12:37.016 "data_offset": 0, 00:12:37.016 "data_size": 63488 00:12:37.016 }, 00:12:37.016 { 00:12:37.016 "name": "BaseBdev3", 00:12:37.016 "uuid": "8f556cec-57c8-4058-9dd4-1809eb251cc5", 00:12:37.016 "is_configured": true, 00:12:37.016 "data_offset": 2048, 00:12:37.016 "data_size": 63488 00:12:37.016 }, 00:12:37.016 { 00:12:37.016 "name": "BaseBdev4", 00:12:37.016 "uuid": "85a56727-244d-4430-85af-f25660fba723", 00:12:37.016 "is_configured": true, 00:12:37.016 "data_offset": 2048, 00:12:37.016 "data_size": 63488 00:12:37.016 } 00:12:37.016 ] 00:12:37.016 }' 00:12:37.016 16:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.016 16:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.276 [2024-12-06 16:28:19.103037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.276 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.536 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.536 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.536 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.536 16:28:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.536 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.536 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.536 "name": "Existed_Raid", 00:12:37.536 "uuid": "68e5272e-a8c7-4e7b-bc9e-ad448a2d45cf", 00:12:37.536 "strip_size_kb": 64, 00:12:37.536 "state": "configuring", 00:12:37.536 "raid_level": "raid0", 00:12:37.536 "superblock": true, 00:12:37.536 "num_base_bdevs": 4, 00:12:37.536 "num_base_bdevs_discovered": 3, 00:12:37.536 "num_base_bdevs_operational": 4, 00:12:37.536 "base_bdevs_list": [ 00:12:37.536 { 00:12:37.536 "name": null, 00:12:37.536 "uuid": "6ff46224-5552-462f-9fc7-5214a0a55317", 00:12:37.536 "is_configured": false, 00:12:37.536 "data_offset": 0, 00:12:37.536 "data_size": 63488 00:12:37.536 }, 00:12:37.536 { 00:12:37.536 "name": "BaseBdev2", 00:12:37.536 "uuid": "7f9cc932-43b8-4152-aaea-83650899dbc5", 00:12:37.536 "is_configured": true, 00:12:37.536 "data_offset": 2048, 00:12:37.536 "data_size": 63488 00:12:37.536 }, 00:12:37.536 { 00:12:37.536 "name": "BaseBdev3", 00:12:37.536 "uuid": "8f556cec-57c8-4058-9dd4-1809eb251cc5", 00:12:37.536 "is_configured": true, 00:12:37.536 "data_offset": 2048, 00:12:37.536 "data_size": 63488 00:12:37.536 }, 00:12:37.536 { 00:12:37.536 "name": "BaseBdev4", 00:12:37.536 "uuid": "85a56727-244d-4430-85af-f25660fba723", 00:12:37.536 "is_configured": true, 00:12:37.536 "data_offset": 2048, 00:12:37.536 "data_size": 63488 00:12:37.536 } 00:12:37.536 ] 00:12:37.536 }' 00:12:37.536 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.536 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.796 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.796 16:28:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:37.796 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.796 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.796 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.796 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:37.796 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.796 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:37.796 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.796 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.796 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.055 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6ff46224-5552-462f-9fc7-5214a0a55317 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.056 NewBaseBdev 00:12:38.056 [2024-12-06 16:28:19.657282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:38.056 [2024-12-06 16:28:19.657473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:38.056 [2024-12-06 16:28:19.657485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:38.056 [2024-12-06 16:28:19.657771] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:38.056 [2024-12-06 16:28:19.657883] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:38.056 [2024-12-06 16:28:19.657893] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:12:38.056 [2024-12-06 16:28:19.658006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.056 
16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.056 [ 00:12:38.056 { 00:12:38.056 "name": "NewBaseBdev", 00:12:38.056 "aliases": [ 00:12:38.056 "6ff46224-5552-462f-9fc7-5214a0a55317" 00:12:38.056 ], 00:12:38.056 "product_name": "Malloc disk", 00:12:38.056 "block_size": 512, 00:12:38.056 "num_blocks": 65536, 00:12:38.056 "uuid": "6ff46224-5552-462f-9fc7-5214a0a55317", 00:12:38.056 "assigned_rate_limits": { 00:12:38.056 "rw_ios_per_sec": 0, 00:12:38.056 "rw_mbytes_per_sec": 0, 00:12:38.056 "r_mbytes_per_sec": 0, 00:12:38.056 "w_mbytes_per_sec": 0 00:12:38.056 }, 00:12:38.056 "claimed": true, 00:12:38.056 "claim_type": "exclusive_write", 00:12:38.056 "zoned": false, 00:12:38.056 "supported_io_types": { 00:12:38.056 "read": true, 00:12:38.056 "write": true, 00:12:38.056 "unmap": true, 00:12:38.056 "flush": true, 00:12:38.056 "reset": true, 00:12:38.056 "nvme_admin": false, 00:12:38.056 "nvme_io": false, 00:12:38.056 "nvme_io_md": false, 00:12:38.056 "write_zeroes": true, 00:12:38.056 "zcopy": true, 00:12:38.056 "get_zone_info": false, 00:12:38.056 "zone_management": false, 00:12:38.056 "zone_append": false, 00:12:38.056 "compare": false, 00:12:38.056 "compare_and_write": false, 00:12:38.056 "abort": true, 00:12:38.056 "seek_hole": false, 00:12:38.056 "seek_data": false, 00:12:38.056 "copy": true, 00:12:38.056 "nvme_iov_md": false 00:12:38.056 }, 00:12:38.056 "memory_domains": [ 00:12:38.056 { 00:12:38.056 "dma_device_id": "system", 00:12:38.056 "dma_device_type": 1 00:12:38.056 }, 00:12:38.056 { 00:12:38.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.056 "dma_device_type": 2 00:12:38.056 } 00:12:38.056 ], 00:12:38.056 "driver_specific": {} 00:12:38.056 } 00:12:38.056 ] 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:38.056 16:28:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.056 "name": "Existed_Raid", 00:12:38.056 "uuid": "68e5272e-a8c7-4e7b-bc9e-ad448a2d45cf", 00:12:38.056 "strip_size_kb": 64, 00:12:38.056 
"state": "online", 00:12:38.056 "raid_level": "raid0", 00:12:38.056 "superblock": true, 00:12:38.056 "num_base_bdevs": 4, 00:12:38.056 "num_base_bdevs_discovered": 4, 00:12:38.056 "num_base_bdevs_operational": 4, 00:12:38.056 "base_bdevs_list": [ 00:12:38.056 { 00:12:38.056 "name": "NewBaseBdev", 00:12:38.056 "uuid": "6ff46224-5552-462f-9fc7-5214a0a55317", 00:12:38.056 "is_configured": true, 00:12:38.056 "data_offset": 2048, 00:12:38.056 "data_size": 63488 00:12:38.056 }, 00:12:38.056 { 00:12:38.056 "name": "BaseBdev2", 00:12:38.056 "uuid": "7f9cc932-43b8-4152-aaea-83650899dbc5", 00:12:38.056 "is_configured": true, 00:12:38.056 "data_offset": 2048, 00:12:38.056 "data_size": 63488 00:12:38.056 }, 00:12:38.056 { 00:12:38.056 "name": "BaseBdev3", 00:12:38.056 "uuid": "8f556cec-57c8-4058-9dd4-1809eb251cc5", 00:12:38.056 "is_configured": true, 00:12:38.056 "data_offset": 2048, 00:12:38.056 "data_size": 63488 00:12:38.056 }, 00:12:38.056 { 00:12:38.056 "name": "BaseBdev4", 00:12:38.056 "uuid": "85a56727-244d-4430-85af-f25660fba723", 00:12:38.056 "is_configured": true, 00:12:38.056 "data_offset": 2048, 00:12:38.056 "data_size": 63488 00:12:38.056 } 00:12:38.056 ] 00:12:38.056 }' 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.056 16:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.316 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:38.316 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:38.316 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:38.316 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:38.316 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:38.316 
16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:38.576 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:38.576 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:38.576 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.576 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.576 [2024-12-06 16:28:20.164911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.576 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:38.577 "name": "Existed_Raid", 00:12:38.577 "aliases": [ 00:12:38.577 "68e5272e-a8c7-4e7b-bc9e-ad448a2d45cf" 00:12:38.577 ], 00:12:38.577 "product_name": "Raid Volume", 00:12:38.577 "block_size": 512, 00:12:38.577 "num_blocks": 253952, 00:12:38.577 "uuid": "68e5272e-a8c7-4e7b-bc9e-ad448a2d45cf", 00:12:38.577 "assigned_rate_limits": { 00:12:38.577 "rw_ios_per_sec": 0, 00:12:38.577 "rw_mbytes_per_sec": 0, 00:12:38.577 "r_mbytes_per_sec": 0, 00:12:38.577 "w_mbytes_per_sec": 0 00:12:38.577 }, 00:12:38.577 "claimed": false, 00:12:38.577 "zoned": false, 00:12:38.577 "supported_io_types": { 00:12:38.577 "read": true, 00:12:38.577 "write": true, 00:12:38.577 "unmap": true, 00:12:38.577 "flush": true, 00:12:38.577 "reset": true, 00:12:38.577 "nvme_admin": false, 00:12:38.577 "nvme_io": false, 00:12:38.577 "nvme_io_md": false, 00:12:38.577 "write_zeroes": true, 00:12:38.577 "zcopy": false, 00:12:38.577 "get_zone_info": false, 00:12:38.577 "zone_management": false, 00:12:38.577 "zone_append": false, 00:12:38.577 "compare": false, 00:12:38.577 "compare_and_write": false, 00:12:38.577 "abort": 
false, 00:12:38.577 "seek_hole": false, 00:12:38.577 "seek_data": false, 00:12:38.577 "copy": false, 00:12:38.577 "nvme_iov_md": false 00:12:38.577 }, 00:12:38.577 "memory_domains": [ 00:12:38.577 { 00:12:38.577 "dma_device_id": "system", 00:12:38.577 "dma_device_type": 1 00:12:38.577 }, 00:12:38.577 { 00:12:38.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.577 "dma_device_type": 2 00:12:38.577 }, 00:12:38.577 { 00:12:38.577 "dma_device_id": "system", 00:12:38.577 "dma_device_type": 1 00:12:38.577 }, 00:12:38.577 { 00:12:38.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.577 "dma_device_type": 2 00:12:38.577 }, 00:12:38.577 { 00:12:38.577 "dma_device_id": "system", 00:12:38.577 "dma_device_type": 1 00:12:38.577 }, 00:12:38.577 { 00:12:38.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.577 "dma_device_type": 2 00:12:38.577 }, 00:12:38.577 { 00:12:38.577 "dma_device_id": "system", 00:12:38.577 "dma_device_type": 1 00:12:38.577 }, 00:12:38.577 { 00:12:38.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.577 "dma_device_type": 2 00:12:38.577 } 00:12:38.577 ], 00:12:38.577 "driver_specific": { 00:12:38.577 "raid": { 00:12:38.577 "uuid": "68e5272e-a8c7-4e7b-bc9e-ad448a2d45cf", 00:12:38.577 "strip_size_kb": 64, 00:12:38.577 "state": "online", 00:12:38.577 "raid_level": "raid0", 00:12:38.577 "superblock": true, 00:12:38.577 "num_base_bdevs": 4, 00:12:38.577 "num_base_bdevs_discovered": 4, 00:12:38.577 "num_base_bdevs_operational": 4, 00:12:38.577 "base_bdevs_list": [ 00:12:38.577 { 00:12:38.577 "name": "NewBaseBdev", 00:12:38.577 "uuid": "6ff46224-5552-462f-9fc7-5214a0a55317", 00:12:38.577 "is_configured": true, 00:12:38.577 "data_offset": 2048, 00:12:38.577 "data_size": 63488 00:12:38.577 }, 00:12:38.577 { 00:12:38.577 "name": "BaseBdev2", 00:12:38.577 "uuid": "7f9cc932-43b8-4152-aaea-83650899dbc5", 00:12:38.577 "is_configured": true, 00:12:38.577 "data_offset": 2048, 00:12:38.577 "data_size": 63488 00:12:38.577 }, 00:12:38.577 { 00:12:38.577 
"name": "BaseBdev3", 00:12:38.577 "uuid": "8f556cec-57c8-4058-9dd4-1809eb251cc5", 00:12:38.577 "is_configured": true, 00:12:38.577 "data_offset": 2048, 00:12:38.577 "data_size": 63488 00:12:38.577 }, 00:12:38.577 { 00:12:38.577 "name": "BaseBdev4", 00:12:38.577 "uuid": "85a56727-244d-4430-85af-f25660fba723", 00:12:38.577 "is_configured": true, 00:12:38.577 "data_offset": 2048, 00:12:38.577 "data_size": 63488 00:12:38.577 } 00:12:38.577 ] 00:12:38.577 } 00:12:38.577 } 00:12:38.577 }' 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:38.577 BaseBdev2 00:12:38.577 BaseBdev3 00:12:38.577 BaseBdev4' 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.577 16:28:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.577 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.837 [2024-12-06 16:28:20.491971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.837 [2024-12-06 16:28:20.492068] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.837 [2024-12-06 16:28:20.492190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.837 [2024-12-06 16:28:20.492311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.837 [2024-12-06 16:28:20.492363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81403 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81403 ']' 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81403 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81403 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.837 killing process with pid 81403 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81403' 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81403 00:12:38.837 [2024-12-06 16:28:20.534905] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.837 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81403 00:12:38.837 [2024-12-06 16:28:20.576853] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.097 16:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:39.097 00:12:39.097 real 0m9.753s 00:12:39.097 user 0m16.813s 00:12:39.097 sys 0m1.971s 00:12:39.097 16:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.097 16:28:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.097 ************************************ 00:12:39.097 END TEST raid_state_function_test_sb 00:12:39.097 ************************************ 00:12:39.097 16:28:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:39.097 16:28:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:39.097 16:28:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.097 16:28:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.097 ************************************ 00:12:39.097 START TEST raid_superblock_test 00:12:39.097 ************************************ 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=82051 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 82051 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 82051 ']' 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.097 16:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.356 [2024-12-06 16:28:20.948809] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:12:39.356 [2024-12-06 16:28:20.949021] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82051 ] 00:12:39.356 [2024-12-06 16:28:21.121175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.356 [2024-12-06 16:28:21.150428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.615 [2024-12-06 16:28:21.193379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.615 [2024-12-06 16:28:21.193418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:40.186 
16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.186 malloc1 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.186 [2024-12-06 16:28:21.841192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:40.186 [2024-12-06 16:28:21.841324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.186 [2024-12-06 16:28:21.841382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:40.186 [2024-12-06 16:28:21.841420] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.186 [2024-12-06 16:28:21.843775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.186 [2024-12-06 16:28:21.843852] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:40.186 pt1 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:40.186 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.187 malloc2 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.187 [2024-12-06 16:28:21.874043] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:40.187 [2024-12-06 16:28:21.874170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.187 [2024-12-06 16:28:21.874217] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:40.187 [2024-12-06 16:28:21.874252] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.187 [2024-12-06 16:28:21.876594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.187 [2024-12-06 16:28:21.876688] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:40.187 
pt2 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.187 malloc3 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.187 [2024-12-06 16:28:21.902976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:40.187 [2024-12-06 16:28:21.903115] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.187 [2024-12-06 16:28:21.903155] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:40.187 [2024-12-06 16:28:21.903187] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.187 [2024-12-06 16:28:21.905504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.187 [2024-12-06 16:28:21.905575] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:40.187 pt3 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.187 malloc4 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.187 [2024-12-06 16:28:21.946863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:40.187 [2024-12-06 16:28:21.946984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.187 [2024-12-06 16:28:21.947030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:40.187 [2024-12-06 16:28:21.947081] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.187 [2024-12-06 16:28:21.949543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.187 [2024-12-06 16:28:21.949639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:40.187 pt4 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.187 [2024-12-06 16:28:21.958944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:40.187 [2024-12-06 
16:28:21.961103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:40.187 [2024-12-06 16:28:21.961269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:40.187 [2024-12-06 16:28:21.961337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:40.187 [2024-12-06 16:28:21.961519] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:40.187 [2024-12-06 16:28:21.961541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:40.187 [2024-12-06 16:28:21.961851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:40.187 [2024-12-06 16:28:21.962017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:40.187 [2024-12-06 16:28:21.962029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:40.187 [2024-12-06 16:28:21.962194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.187 16:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.187 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.187 "name": "raid_bdev1", 00:12:40.187 "uuid": "6a834676-43cb-4784-bd02-f7404df1a605", 00:12:40.187 "strip_size_kb": 64, 00:12:40.187 "state": "online", 00:12:40.187 "raid_level": "raid0", 00:12:40.187 "superblock": true, 00:12:40.187 "num_base_bdevs": 4, 00:12:40.187 "num_base_bdevs_discovered": 4, 00:12:40.187 "num_base_bdevs_operational": 4, 00:12:40.187 "base_bdevs_list": [ 00:12:40.187 { 00:12:40.187 "name": "pt1", 00:12:40.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:40.187 "is_configured": true, 00:12:40.187 "data_offset": 2048, 00:12:40.187 "data_size": 63488 00:12:40.187 }, 00:12:40.187 { 00:12:40.187 "name": "pt2", 00:12:40.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.187 "is_configured": true, 00:12:40.187 "data_offset": 2048, 00:12:40.187 "data_size": 63488 00:12:40.187 }, 00:12:40.187 { 00:12:40.187 "name": "pt3", 00:12:40.187 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.187 "is_configured": true, 00:12:40.187 "data_offset": 2048, 00:12:40.187 
"data_size": 63488 00:12:40.187 }, 00:12:40.187 { 00:12:40.187 "name": "pt4", 00:12:40.187 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.187 "is_configured": true, 00:12:40.187 "data_offset": 2048, 00:12:40.187 "data_size": 63488 00:12:40.187 } 00:12:40.187 ] 00:12:40.187 }' 00:12:40.187 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.188 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.756 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.757 [2024-12-06 16:28:22.442508] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.757 "name": "raid_bdev1", 00:12:40.757 "aliases": [ 00:12:40.757 "6a834676-43cb-4784-bd02-f7404df1a605" 
00:12:40.757 ], 00:12:40.757 "product_name": "Raid Volume", 00:12:40.757 "block_size": 512, 00:12:40.757 "num_blocks": 253952, 00:12:40.757 "uuid": "6a834676-43cb-4784-bd02-f7404df1a605", 00:12:40.757 "assigned_rate_limits": { 00:12:40.757 "rw_ios_per_sec": 0, 00:12:40.757 "rw_mbytes_per_sec": 0, 00:12:40.757 "r_mbytes_per_sec": 0, 00:12:40.757 "w_mbytes_per_sec": 0 00:12:40.757 }, 00:12:40.757 "claimed": false, 00:12:40.757 "zoned": false, 00:12:40.757 "supported_io_types": { 00:12:40.757 "read": true, 00:12:40.757 "write": true, 00:12:40.757 "unmap": true, 00:12:40.757 "flush": true, 00:12:40.757 "reset": true, 00:12:40.757 "nvme_admin": false, 00:12:40.757 "nvme_io": false, 00:12:40.757 "nvme_io_md": false, 00:12:40.757 "write_zeroes": true, 00:12:40.757 "zcopy": false, 00:12:40.757 "get_zone_info": false, 00:12:40.757 "zone_management": false, 00:12:40.757 "zone_append": false, 00:12:40.757 "compare": false, 00:12:40.757 "compare_and_write": false, 00:12:40.757 "abort": false, 00:12:40.757 "seek_hole": false, 00:12:40.757 "seek_data": false, 00:12:40.757 "copy": false, 00:12:40.757 "nvme_iov_md": false 00:12:40.757 }, 00:12:40.757 "memory_domains": [ 00:12:40.757 { 00:12:40.757 "dma_device_id": "system", 00:12:40.757 "dma_device_type": 1 00:12:40.757 }, 00:12:40.757 { 00:12:40.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.757 "dma_device_type": 2 00:12:40.757 }, 00:12:40.757 { 00:12:40.757 "dma_device_id": "system", 00:12:40.757 "dma_device_type": 1 00:12:40.757 }, 00:12:40.757 { 00:12:40.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.757 "dma_device_type": 2 00:12:40.757 }, 00:12:40.757 { 00:12:40.757 "dma_device_id": "system", 00:12:40.757 "dma_device_type": 1 00:12:40.757 }, 00:12:40.757 { 00:12:40.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.757 "dma_device_type": 2 00:12:40.757 }, 00:12:40.757 { 00:12:40.757 "dma_device_id": "system", 00:12:40.757 "dma_device_type": 1 00:12:40.757 }, 00:12:40.757 { 00:12:40.757 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:40.757 "dma_device_type": 2 00:12:40.757 } 00:12:40.757 ], 00:12:40.757 "driver_specific": { 00:12:40.757 "raid": { 00:12:40.757 "uuid": "6a834676-43cb-4784-bd02-f7404df1a605", 00:12:40.757 "strip_size_kb": 64, 00:12:40.757 "state": "online", 00:12:40.757 "raid_level": "raid0", 00:12:40.757 "superblock": true, 00:12:40.757 "num_base_bdevs": 4, 00:12:40.757 "num_base_bdevs_discovered": 4, 00:12:40.757 "num_base_bdevs_operational": 4, 00:12:40.757 "base_bdevs_list": [ 00:12:40.757 { 00:12:40.757 "name": "pt1", 00:12:40.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:40.757 "is_configured": true, 00:12:40.757 "data_offset": 2048, 00:12:40.757 "data_size": 63488 00:12:40.757 }, 00:12:40.757 { 00:12:40.757 "name": "pt2", 00:12:40.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.757 "is_configured": true, 00:12:40.757 "data_offset": 2048, 00:12:40.757 "data_size": 63488 00:12:40.757 }, 00:12:40.757 { 00:12:40.757 "name": "pt3", 00:12:40.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.757 "is_configured": true, 00:12:40.757 "data_offset": 2048, 00:12:40.757 "data_size": 63488 00:12:40.757 }, 00:12:40.757 { 00:12:40.757 "name": "pt4", 00:12:40.757 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.757 "is_configured": true, 00:12:40.757 "data_offset": 2048, 00:12:40.757 "data_size": 63488 00:12:40.757 } 00:12:40.757 ] 00:12:40.757 } 00:12:40.757 } 00:12:40.757 }' 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:40.757 pt2 00:12:40.757 pt3 00:12:40.757 pt4' 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.757 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.017 16:28:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:41.017 [2024-12-06 16:28:22.773813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6a834676-43cb-4784-bd02-f7404df1a605 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6a834676-43cb-4784-bd02-f7404df1a605 ']' 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.017 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.018 [2024-12-06 16:28:22.821421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.018 [2024-12-06 16:28:22.821524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.018 [2024-12-06 16:28:22.821675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.018 [2024-12-06 16:28:22.821814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.018 [2024-12-06 16:28:22.821867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:41.018 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.018 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.018 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:41.018 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:41.018 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.018 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.278 [2024-12-06 16:28:22.989177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:41.278 [2024-12-06 16:28:22.991260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:41.278 [2024-12-06 16:28:22.991316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:41.278 [2024-12-06 16:28:22.991346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:41.278 [2024-12-06 16:28:22.991397] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:41.278 [2024-12-06 16:28:22.991446] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:41.278 [2024-12-06 16:28:22.991466] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:41.278 [2024-12-06 16:28:22.991483] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:41.278 [2024-12-06 16:28:22.991505] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.278 [2024-12-06 16:28:22.991515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:12:41.278 request: 00:12:41.278 { 00:12:41.278 "name": "raid_bdev1", 00:12:41.278 "raid_level": "raid0", 00:12:41.278 "base_bdevs": [ 00:12:41.278 "malloc1", 00:12:41.278 "malloc2", 00:12:41.278 "malloc3", 00:12:41.278 "malloc4" 00:12:41.278 ], 00:12:41.278 "strip_size_kb": 64, 00:12:41.278 "superblock": false, 00:12:41.278 "method": "bdev_raid_create", 00:12:41.278 "req_id": 1 00:12:41.278 } 00:12:41.278 Got JSON-RPC error response 00:12:41.278 response: 00:12:41.278 { 00:12:41.278 "code": -17, 00:12:41.278 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:41.278 } 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:41.278 16:28:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.278 [2024-12-06 16:28:23.068976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:41.278 [2024-12-06 16:28:23.069050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.278 [2024-12-06 16:28:23.069077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:41.278 [2024-12-06 16:28:23.069088] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.278 [2024-12-06 16:28:23.071563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.278 [2024-12-06 16:28:23.071609] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:41.278 [2024-12-06 16:28:23.071743] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:41.278 [2024-12-06 16:28:23.071796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:41.278 pt1 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:41.278 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.279 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:41.279 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.279 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.279 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.279 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.279 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.279 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.279 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.279 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.279 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.539 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.539 "name": "raid_bdev1", 00:12:41.539 "uuid": "6a834676-43cb-4784-bd02-f7404df1a605", 00:12:41.539 "strip_size_kb": 64, 00:12:41.539 "state": "configuring", 00:12:41.539 "raid_level": "raid0", 00:12:41.539 "superblock": true, 00:12:41.539 "num_base_bdevs": 4, 00:12:41.539 "num_base_bdevs_discovered": 1, 00:12:41.539 "num_base_bdevs_operational": 4, 00:12:41.539 "base_bdevs_list": [ 00:12:41.539 { 00:12:41.539 "name": "pt1", 00:12:41.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:41.539 "is_configured": true, 00:12:41.539 "data_offset": 2048, 00:12:41.539 "data_size": 63488 00:12:41.539 }, 00:12:41.539 { 00:12:41.539 "name": null, 00:12:41.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.539 "is_configured": false, 00:12:41.539 "data_offset": 2048, 00:12:41.539 "data_size": 63488 00:12:41.539 }, 00:12:41.539 { 00:12:41.539 "name": null, 00:12:41.539 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:41.539 "is_configured": false, 00:12:41.539 "data_offset": 2048, 00:12:41.539 "data_size": 63488 00:12:41.539 }, 00:12:41.539 { 00:12:41.539 "name": null, 00:12:41.539 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:41.539 "is_configured": false, 00:12:41.539 "data_offset": 2048, 00:12:41.539 "data_size": 63488 00:12:41.539 } 00:12:41.539 ] 00:12:41.539 }' 00:12:41.539 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.539 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.799 [2024-12-06 16:28:23.532213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:41.799 [2024-12-06 16:28:23.532369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.799 [2024-12-06 16:28:23.532400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:41.799 [2024-12-06 16:28:23.532411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.799 [2024-12-06 16:28:23.532898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.799 [2024-12-06 16:28:23.532917] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:41.799 [2024-12-06 16:28:23.533005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:41.799 [2024-12-06 16:28:23.533029] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:41.799 pt2 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.799 [2024-12-06 16:28:23.540198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.799 16:28:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.799 "name": "raid_bdev1", 00:12:41.799 "uuid": "6a834676-43cb-4784-bd02-f7404df1a605", 00:12:41.799 "strip_size_kb": 64, 00:12:41.799 "state": "configuring", 00:12:41.799 "raid_level": "raid0", 00:12:41.799 "superblock": true, 00:12:41.799 "num_base_bdevs": 4, 00:12:41.799 "num_base_bdevs_discovered": 1, 00:12:41.799 "num_base_bdevs_operational": 4, 00:12:41.799 "base_bdevs_list": [ 00:12:41.799 { 00:12:41.799 "name": "pt1", 00:12:41.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:41.799 "is_configured": true, 00:12:41.799 "data_offset": 2048, 00:12:41.799 "data_size": 63488 00:12:41.799 }, 00:12:41.799 { 00:12:41.799 "name": null, 00:12:41.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.799 "is_configured": false, 00:12:41.799 "data_offset": 0, 00:12:41.799 "data_size": 63488 00:12:41.799 }, 00:12:41.799 { 00:12:41.799 "name": null, 00:12:41.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.799 "is_configured": false, 00:12:41.799 "data_offset": 2048, 00:12:41.799 "data_size": 63488 00:12:41.799 }, 00:12:41.799 { 00:12:41.799 "name": null, 00:12:41.799 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:41.799 "is_configured": false, 00:12:41.799 "data_offset": 2048, 00:12:41.799 "data_size": 63488 00:12:41.799 } 00:12:41.799 ] 00:12:41.799 }' 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.799 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:42.369 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:42.369 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:42.369 16:28:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:42.369 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.369 16:28:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.369 [2024-12-06 16:28:23.999431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:42.369 [2024-12-06 16:28:23.999531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.369 [2024-12-06 16:28:23.999552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:42.369 [2024-12-06 16:28:23.999581] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.369 [2024-12-06 16:28:24.000033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.369 [2024-12-06 16:28:24.000057] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:42.369 [2024-12-06 16:28:24.000141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:42.369 [2024-12-06 16:28:24.000168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:42.369 pt2 00:12:42.369 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.369 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:42.369 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:42.369 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:12:42.369 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.369 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.369 [2024-12-06 16:28:24.011380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:42.369 [2024-12-06 16:28:24.011501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.370 [2024-12-06 16:28:24.011530] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:42.370 [2024-12-06 16:28:24.011545] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.370 [2024-12-06 16:28:24.011975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.370 [2024-12-06 16:28:24.011997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:42.370 [2024-12-06 16:28:24.012069] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:42.370 [2024-12-06 16:28:24.012103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:42.370 pt3 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.370 [2024-12-06 16:28:24.023344] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:12:42.370 [2024-12-06 16:28:24.023418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.370 [2024-12-06 16:28:24.023436] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:42.370 [2024-12-06 16:28:24.023446] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.370 [2024-12-06 16:28:24.023864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.370 [2024-12-06 16:28:24.023886] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:42.370 [2024-12-06 16:28:24.023956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:42.370 [2024-12-06 16:28:24.023982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:42.370 [2024-12-06 16:28:24.024096] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:42.370 [2024-12-06 16:28:24.024111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:42.370 [2024-12-06 16:28:24.024408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:42.370 [2024-12-06 16:28:24.024566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:42.370 [2024-12-06 16:28:24.024584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:12:42.370 [2024-12-06 16:28:24.024716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.370 pt4 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:42.370 
16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.370 "name": "raid_bdev1", 00:12:42.370 "uuid": "6a834676-43cb-4784-bd02-f7404df1a605", 00:12:42.370 "strip_size_kb": 64, 00:12:42.370 "state": "online", 00:12:42.370 "raid_level": "raid0", 00:12:42.370 "superblock": true, 00:12:42.370 
"num_base_bdevs": 4, 00:12:42.370 "num_base_bdevs_discovered": 4, 00:12:42.370 "num_base_bdevs_operational": 4, 00:12:42.370 "base_bdevs_list": [ 00:12:42.370 { 00:12:42.370 "name": "pt1", 00:12:42.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.370 "is_configured": true, 00:12:42.370 "data_offset": 2048, 00:12:42.370 "data_size": 63488 00:12:42.370 }, 00:12:42.370 { 00:12:42.370 "name": "pt2", 00:12:42.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.370 "is_configured": true, 00:12:42.370 "data_offset": 2048, 00:12:42.370 "data_size": 63488 00:12:42.370 }, 00:12:42.370 { 00:12:42.370 "name": "pt3", 00:12:42.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.370 "is_configured": true, 00:12:42.370 "data_offset": 2048, 00:12:42.370 "data_size": 63488 00:12:42.370 }, 00:12:42.370 { 00:12:42.370 "name": "pt4", 00:12:42.370 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:42.370 "is_configured": true, 00:12:42.370 "data_offset": 2048, 00:12:42.370 "data_size": 63488 00:12:42.370 } 00:12:42.370 ] 00:12:42.370 }' 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.370 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.629 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:42.629 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:42.629 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:42.629 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:42.629 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:42.629 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:42.629 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:42.629 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:42.629 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.629 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.889 [2024-12-06 16:28:24.467000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:42.889 "name": "raid_bdev1", 00:12:42.889 "aliases": [ 00:12:42.889 "6a834676-43cb-4784-bd02-f7404df1a605" 00:12:42.889 ], 00:12:42.889 "product_name": "Raid Volume", 00:12:42.889 "block_size": 512, 00:12:42.889 "num_blocks": 253952, 00:12:42.889 "uuid": "6a834676-43cb-4784-bd02-f7404df1a605", 00:12:42.889 "assigned_rate_limits": { 00:12:42.889 "rw_ios_per_sec": 0, 00:12:42.889 "rw_mbytes_per_sec": 0, 00:12:42.889 "r_mbytes_per_sec": 0, 00:12:42.889 "w_mbytes_per_sec": 0 00:12:42.889 }, 00:12:42.889 "claimed": false, 00:12:42.889 "zoned": false, 00:12:42.889 "supported_io_types": { 00:12:42.889 "read": true, 00:12:42.889 "write": true, 00:12:42.889 "unmap": true, 00:12:42.889 "flush": true, 00:12:42.889 "reset": true, 00:12:42.889 "nvme_admin": false, 00:12:42.889 "nvme_io": false, 00:12:42.889 "nvme_io_md": false, 00:12:42.889 "write_zeroes": true, 00:12:42.889 "zcopy": false, 00:12:42.889 "get_zone_info": false, 00:12:42.889 "zone_management": false, 00:12:42.889 "zone_append": false, 00:12:42.889 "compare": false, 00:12:42.889 "compare_and_write": false, 00:12:42.889 "abort": false, 00:12:42.889 "seek_hole": false, 00:12:42.889 "seek_data": false, 00:12:42.889 "copy": false, 00:12:42.889 "nvme_iov_md": false 00:12:42.889 }, 00:12:42.889 "memory_domains": [ 00:12:42.889 { 00:12:42.889 "dma_device_id": "system", 
00:12:42.889 "dma_device_type": 1 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.889 "dma_device_type": 2 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "dma_device_id": "system", 00:12:42.889 "dma_device_type": 1 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.889 "dma_device_type": 2 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "dma_device_id": "system", 00:12:42.889 "dma_device_type": 1 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.889 "dma_device_type": 2 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "dma_device_id": "system", 00:12:42.889 "dma_device_type": 1 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.889 "dma_device_type": 2 00:12:42.889 } 00:12:42.889 ], 00:12:42.889 "driver_specific": { 00:12:42.889 "raid": { 00:12:42.889 "uuid": "6a834676-43cb-4784-bd02-f7404df1a605", 00:12:42.889 "strip_size_kb": 64, 00:12:42.889 "state": "online", 00:12:42.889 "raid_level": "raid0", 00:12:42.889 "superblock": true, 00:12:42.889 "num_base_bdevs": 4, 00:12:42.889 "num_base_bdevs_discovered": 4, 00:12:42.889 "num_base_bdevs_operational": 4, 00:12:42.889 "base_bdevs_list": [ 00:12:42.889 { 00:12:42.889 "name": "pt1", 00:12:42.889 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.889 "is_configured": true, 00:12:42.889 "data_offset": 2048, 00:12:42.889 "data_size": 63488 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "name": "pt2", 00:12:42.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.889 "is_configured": true, 00:12:42.889 "data_offset": 2048, 00:12:42.889 "data_size": 63488 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "name": "pt3", 00:12:42.889 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.889 "is_configured": true, 00:12:42.889 "data_offset": 2048, 00:12:42.889 "data_size": 63488 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "name": "pt4", 00:12:42.889 
"uuid": "00000000-0000-0000-0000-000000000004", 00:12:42.889 "is_configured": true, 00:12:42.889 "data_offset": 2048, 00:12:42.889 "data_size": 63488 00:12:42.889 } 00:12:42.889 ] 00:12:42.889 } 00:12:42.889 } 00:12:42.889 }' 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:42.889 pt2 00:12:42.889 pt3 00:12:42.889 pt4' 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.889 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.149 [2024-12-06 16:28:24.782411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6a834676-43cb-4784-bd02-f7404df1a605 '!=' 6a834676-43cb-4784-bd02-f7404df1a605 ']' 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 82051 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 82051 ']' 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 82051 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:43.149 16:28:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82051 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82051' 00:12:43.149 killing process with pid 82051 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 82051 00:12:43.149 [2024-12-06 16:28:24.867601] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.149 [2024-12-06 16:28:24.867753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.149 16:28:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 82051 00:12:43.149 [2024-12-06 16:28:24.867871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.149 [2024-12-06 16:28:24.867926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:12:43.149 [2024-12-06 16:28:24.912435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.409 16:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:43.409 00:12:43.409 real 0m4.266s 00:12:43.409 user 0m6.763s 00:12:43.409 sys 0m0.969s 00:12:43.409 16:28:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.409 16:28:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.409 ************************************ 00:12:43.409 END TEST raid_superblock_test 00:12:43.409 ************************************ 00:12:43.409 
16:28:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:43.409 16:28:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:43.409 16:28:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.409 16:28:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:43.409 ************************************ 00:12:43.409 START TEST raid_read_error_test 00:12:43.409 ************************************ 00:12:43.409 16:28:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:12:43.409 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:43.409 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QMVOeE8Qkg 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82305 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:43.410 16:28:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82305 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 82305 ']' 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.410 16:28:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.674 [2024-12-06 16:28:25.297766] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:12:43.674 [2024-12-06 16:28:25.297934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82305 ] 00:12:43.674 [2024-12-06 16:28:25.473178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.674 [2024-12-06 16:28:25.499472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.939 [2024-12-06 16:28:25.542098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.939 [2024-12-06 16:28:25.542251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 BaseBdev1_malloc 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 true 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 [2024-12-06 16:28:26.201542] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:44.506 [2024-12-06 16:28:26.201601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.506 [2024-12-06 16:28:26.201625] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:44.506 [2024-12-06 16:28:26.201635] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.506 [2024-12-06 16:28:26.203876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.506 [2024-12-06 16:28:26.203987] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:44.506 BaseBdev1 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 BaseBdev2_malloc 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 true 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 [2024-12-06 16:28:26.242279] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:44.506 [2024-12-06 16:28:26.242398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.506 [2024-12-06 16:28:26.242430] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:44.506 [2024-12-06 16:28:26.242443] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.506 [2024-12-06 16:28:26.245145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.506 [2024-12-06 16:28:26.245189] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:44.506 BaseBdev2 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 BaseBdev3_malloc 00:12:44.506 16:28:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 true 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 [2024-12-06 16:28:26.283514] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:44.506 [2024-12-06 16:28:26.283593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.506 [2024-12-06 16:28:26.283620] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:44.506 [2024-12-06 16:28:26.283632] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.506 [2024-12-06 16:28:26.286192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.506 [2024-12-06 16:28:26.286245] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:44.506 BaseBdev3 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 BaseBdev4_malloc 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 true 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.506 [2024-12-06 16:28:26.336048] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:44.506 [2024-12-06 16:28:26.336114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.506 [2024-12-06 16:28:26.336146] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:44.506 [2024-12-06 16:28:26.336157] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.506 [2024-12-06 16:28:26.338584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.506 [2024-12-06 16:28:26.338623] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:44.506 BaseBdev4 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.506 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.766 [2024-12-06 16:28:26.348105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.766 [2024-12-06 16:28:26.350162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.766 [2024-12-06 16:28:26.350275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.766 [2024-12-06 16:28:26.350332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.766 [2024-12-06 16:28:26.350545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:12:44.766 [2024-12-06 16:28:26.350564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:44.766 [2024-12-06 16:28:26.350862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:44.766 [2024-12-06 16:28:26.351028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:12:44.766 [2024-12-06 16:28:26.351049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:12:44.766 [2024-12-06 16:28:26.351219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:44.766 16:28:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.766 "name": "raid_bdev1", 00:12:44.766 "uuid": "d3fd3e26-5988-4c2e-ad55-f28708854d6b", 00:12:44.766 "strip_size_kb": 64, 00:12:44.766 "state": "online", 00:12:44.766 "raid_level": "raid0", 00:12:44.766 "superblock": true, 00:12:44.766 "num_base_bdevs": 4, 00:12:44.766 "num_base_bdevs_discovered": 4, 00:12:44.766 "num_base_bdevs_operational": 4, 00:12:44.766 "base_bdevs_list": [ 00:12:44.766 
{ 00:12:44.766 "name": "BaseBdev1", 00:12:44.766 "uuid": "6a552f68-c308-5547-b635-83891ae2744a", 00:12:44.766 "is_configured": true, 00:12:44.766 "data_offset": 2048, 00:12:44.766 "data_size": 63488 00:12:44.766 }, 00:12:44.766 { 00:12:44.766 "name": "BaseBdev2", 00:12:44.766 "uuid": "35ecd48d-a5c0-59ce-935e-1baee24abd14", 00:12:44.766 "is_configured": true, 00:12:44.766 "data_offset": 2048, 00:12:44.766 "data_size": 63488 00:12:44.766 }, 00:12:44.766 { 00:12:44.766 "name": "BaseBdev3", 00:12:44.766 "uuid": "a071aab6-e714-5bf4-a70f-e7ebe2763199", 00:12:44.766 "is_configured": true, 00:12:44.766 "data_offset": 2048, 00:12:44.766 "data_size": 63488 00:12:44.766 }, 00:12:44.766 { 00:12:44.766 "name": "BaseBdev4", 00:12:44.766 "uuid": "c2fd12fa-4366-5761-b1cf-a96bee158829", 00:12:44.766 "is_configured": true, 00:12:44.766 "data_offset": 2048, 00:12:44.766 "data_size": 63488 00:12:44.766 } 00:12:44.766 ] 00:12:44.766 }' 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.766 16:28:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.025 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:45.025 16:28:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:45.284 [2024-12-06 16:28:26.947483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.220 16:28:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.220 16:28:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.220 16:28:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.220 "name": "raid_bdev1", 00:12:46.220 "uuid": "d3fd3e26-5988-4c2e-ad55-f28708854d6b", 00:12:46.220 "strip_size_kb": 64, 00:12:46.220 "state": "online", 00:12:46.220 "raid_level": "raid0", 00:12:46.220 "superblock": true, 00:12:46.220 "num_base_bdevs": 4, 00:12:46.220 "num_base_bdevs_discovered": 4, 00:12:46.220 "num_base_bdevs_operational": 4, 00:12:46.220 "base_bdevs_list": [ 00:12:46.220 { 00:12:46.220 "name": "BaseBdev1", 00:12:46.221 "uuid": "6a552f68-c308-5547-b635-83891ae2744a", 00:12:46.221 "is_configured": true, 00:12:46.221 "data_offset": 2048, 00:12:46.221 "data_size": 63488 00:12:46.221 }, 00:12:46.221 { 00:12:46.221 "name": "BaseBdev2", 00:12:46.221 "uuid": "35ecd48d-a5c0-59ce-935e-1baee24abd14", 00:12:46.221 "is_configured": true, 00:12:46.221 "data_offset": 2048, 00:12:46.221 "data_size": 63488 00:12:46.221 }, 00:12:46.221 { 00:12:46.221 "name": "BaseBdev3", 00:12:46.221 "uuid": "a071aab6-e714-5bf4-a70f-e7ebe2763199", 00:12:46.221 "is_configured": true, 00:12:46.221 "data_offset": 2048, 00:12:46.221 "data_size": 63488 00:12:46.221 }, 00:12:46.221 { 00:12:46.221 "name": "BaseBdev4", 00:12:46.221 "uuid": "c2fd12fa-4366-5761-b1cf-a96bee158829", 00:12:46.221 "is_configured": true, 00:12:46.221 "data_offset": 2048, 00:12:46.221 "data_size": 63488 00:12:46.221 } 00:12:46.221 ] 00:12:46.221 }' 00:12:46.221 16:28:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.221 16:28:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.480 16:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.480 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.480 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.480 [2024-12-06 16:28:28.287926] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.480 [2024-12-06 16:28:28.287965] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.480 [2024-12-06 16:28:28.291033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.480 [2024-12-06 16:28:28.291103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.480 [2024-12-06 16:28:28.291157] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.480 [2024-12-06 16:28:28.291167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:12:46.480 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.480 { 00:12:46.480 "results": [ 00:12:46.480 { 00:12:46.480 "job": "raid_bdev1", 00:12:46.480 "core_mask": "0x1", 00:12:46.480 "workload": "randrw", 00:12:46.480 "percentage": 50, 00:12:46.480 "status": "finished", 00:12:46.480 "queue_depth": 1, 00:12:46.480 "io_size": 131072, 00:12:46.480 "runtime": 1.341164, 00:12:46.480 "iops": 14745.400264248072, 00:12:46.480 "mibps": 1843.175033031009, 00:12:46.480 "io_failed": 1, 00:12:46.480 "io_timeout": 0, 00:12:46.480 "avg_latency_us": 93.65613949245882, 00:12:46.480 "min_latency_us": 28.28296943231441, 00:12:46.480 "max_latency_us": 1752.8733624454148 00:12:46.480 } 00:12:46.480 ], 00:12:46.480 "core_count": 1 00:12:46.480 } 00:12:46.480 16:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82305 00:12:46.480 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 82305 ']' 00:12:46.480 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 82305 00:12:46.480 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:46.480 16:28:28 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.480 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82305 00:12:46.739 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.739 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.739 killing process with pid 82305 00:12:46.739 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82305' 00:12:46.739 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 82305 00:12:46.739 [2024-12-06 16:28:28.334124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.739 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 82305 00:12:46.739 [2024-12-06 16:28:28.370957] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:46.998 16:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QMVOeE8Qkg 00:12:46.998 16:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:46.998 16:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:46.998 16:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:12:46.998 16:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:46.999 16:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:46.999 16:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:46.999 16:28:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:12:46.999 00:12:46.999 real 0m3.403s 00:12:46.999 user 0m4.368s 00:12:46.999 sys 0m0.534s 00:12:46.999 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:46.999 16:28:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.999 ************************************ 00:12:46.999 END TEST raid_read_error_test 00:12:46.999 ************************************ 00:12:46.999 16:28:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:46.999 16:28:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:46.999 16:28:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.999 16:28:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:46.999 ************************************ 00:12:46.999 START TEST raid_write_error_test 00:12:46.999 ************************************ 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.shgjfswTff 00:12:46.999 16:28:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82439 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82439 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 82439 ']' 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.999 16:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.999 [2024-12-06 16:28:28.767800] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:12:46.999 [2024-12-06 16:28:28.767929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82439 ] 00:12:47.259 [2024-12-06 16:28:28.940140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.259 [2024-12-06 16:28:28.966514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.259 [2024-12-06 16:28:29.009786] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.259 [2024-12-06 16:28:29.009827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.874 BaseBdev1_malloc 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.874 true 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.874 [2024-12-06 16:28:29.653651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:47.874 [2024-12-06 16:28:29.653704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.874 [2024-12-06 16:28:29.653728] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:47.874 [2024-12-06 16:28:29.653738] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.874 [2024-12-06 16:28:29.656111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.874 [2024-12-06 16:28:29.656146] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:47.874 BaseBdev1 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.874 BaseBdev2_malloc 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:47.874 16:28:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.874 true 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.874 [2024-12-06 16:28:29.694489] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:47.874 [2024-12-06 16:28:29.694541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.874 [2024-12-06 16:28:29.694562] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:47.874 [2024-12-06 16:28:29.694572] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.874 [2024-12-06 16:28:29.696990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.874 [2024-12-06 16:28:29.697025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:47.874 BaseBdev2 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.874 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:48.134 BaseBdev3_malloc 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.134 true 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.134 [2024-12-06 16:28:29.735239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:48.134 [2024-12-06 16:28:29.735280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.134 [2024-12-06 16:28:29.735299] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:48.134 [2024-12-06 16:28:29.735309] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.134 [2024-12-06 16:28:29.737549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.134 [2024-12-06 16:28:29.737580] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:48.134 BaseBdev3 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.134 BaseBdev4_malloc 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.134 true 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.134 [2024-12-06 16:28:29.787310] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:48.134 [2024-12-06 16:28:29.787353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.134 [2024-12-06 16:28:29.787376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:48.134 [2024-12-06 16:28:29.787385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.134 [2024-12-06 16:28:29.789704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.134 [2024-12-06 16:28:29.789737] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:48.134 BaseBdev4 
00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.134 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.134 [2024-12-06 16:28:29.799359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.135 [2024-12-06 16:28:29.801373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.135 [2024-12-06 16:28:29.801468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.135 [2024-12-06 16:28:29.801523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:48.135 [2024-12-06 16:28:29.801740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:12:48.135 [2024-12-06 16:28:29.801761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:48.135 [2024-12-06 16:28:29.802061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:48.135 [2024-12-06 16:28:29.802240] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:12:48.135 [2024-12-06 16:28:29.802263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:12:48.135 [2024-12-06 16:28:29.802399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.135 "name": "raid_bdev1", 00:12:48.135 "uuid": "b2bb8a0c-c67a-4cf3-a658-cd97c9b92468", 00:12:48.135 "strip_size_kb": 64, 00:12:48.135 "state": "online", 00:12:48.135 "raid_level": "raid0", 00:12:48.135 "superblock": true, 00:12:48.135 "num_base_bdevs": 4, 00:12:48.135 "num_base_bdevs_discovered": 4, 00:12:48.135 
"num_base_bdevs_operational": 4, 00:12:48.135 "base_bdevs_list": [ 00:12:48.135 { 00:12:48.135 "name": "BaseBdev1", 00:12:48.135 "uuid": "69146e8c-674e-5b8e-b12d-e5f29edb61b9", 00:12:48.135 "is_configured": true, 00:12:48.135 "data_offset": 2048, 00:12:48.135 "data_size": 63488 00:12:48.135 }, 00:12:48.135 { 00:12:48.135 "name": "BaseBdev2", 00:12:48.135 "uuid": "904e73f2-9b5e-5207-880f-c3388950d5c4", 00:12:48.135 "is_configured": true, 00:12:48.135 "data_offset": 2048, 00:12:48.135 "data_size": 63488 00:12:48.135 }, 00:12:48.135 { 00:12:48.135 "name": "BaseBdev3", 00:12:48.135 "uuid": "d6cd0c4d-c8bd-5ddd-b5b6-881eb5a9f417", 00:12:48.135 "is_configured": true, 00:12:48.135 "data_offset": 2048, 00:12:48.135 "data_size": 63488 00:12:48.135 }, 00:12:48.135 { 00:12:48.135 "name": "BaseBdev4", 00:12:48.135 "uuid": "25540e29-3a46-586d-a659-7cb5833b5f50", 00:12:48.135 "is_configured": true, 00:12:48.135 "data_offset": 2048, 00:12:48.135 "data_size": 63488 00:12:48.135 } 00:12:48.135 ] 00:12:48.135 }' 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.135 16:28:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.705 16:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:48.705 16:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:48.705 [2024-12-06 16:28:30.362764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.644 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.644 "name": "raid_bdev1", 00:12:49.644 "uuid": "b2bb8a0c-c67a-4cf3-a658-cd97c9b92468", 00:12:49.644 "strip_size_kb": 64, 00:12:49.644 "state": "online", 00:12:49.644 "raid_level": "raid0", 00:12:49.644 "superblock": true, 00:12:49.644 "num_base_bdevs": 4, 00:12:49.644 "num_base_bdevs_discovered": 4, 00:12:49.644 "num_base_bdevs_operational": 4, 00:12:49.644 "base_bdevs_list": [ 00:12:49.644 { 00:12:49.644 "name": "BaseBdev1", 00:12:49.644 "uuid": "69146e8c-674e-5b8e-b12d-e5f29edb61b9", 00:12:49.644 "is_configured": true, 00:12:49.644 "data_offset": 2048, 00:12:49.645 "data_size": 63488 00:12:49.645 }, 00:12:49.645 { 00:12:49.645 "name": "BaseBdev2", 00:12:49.645 "uuid": "904e73f2-9b5e-5207-880f-c3388950d5c4", 00:12:49.645 "is_configured": true, 00:12:49.645 "data_offset": 2048, 00:12:49.645 "data_size": 63488 00:12:49.645 }, 00:12:49.645 { 00:12:49.645 "name": "BaseBdev3", 00:12:49.645 "uuid": "d6cd0c4d-c8bd-5ddd-b5b6-881eb5a9f417", 00:12:49.645 "is_configured": true, 00:12:49.645 "data_offset": 2048, 00:12:49.645 "data_size": 63488 00:12:49.645 }, 00:12:49.645 { 00:12:49.645 "name": "BaseBdev4", 00:12:49.645 "uuid": "25540e29-3a46-586d-a659-7cb5833b5f50", 00:12:49.645 "is_configured": true, 00:12:49.645 "data_offset": 2048, 00:12:49.645 "data_size": 63488 00:12:49.645 } 00:12:49.645 ] 00:12:49.645 }' 00:12:49.645 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.645 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.213 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:50.213 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.213 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:50.213 [2024-12-06 16:28:31.779737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.213 [2024-12-06 16:28:31.779775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.213 [2024-12-06 16:28:31.782738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.213 [2024-12-06 16:28:31.782802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.213 [2024-12-06 16:28:31.782849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.213 [2024-12-06 16:28:31.782859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:12:50.213 { 00:12:50.213 "results": [ 00:12:50.213 { 00:12:50.213 "job": "raid_bdev1", 00:12:50.213 "core_mask": "0x1", 00:12:50.213 "workload": "randrw", 00:12:50.213 "percentage": 50, 00:12:50.213 "status": "finished", 00:12:50.213 "queue_depth": 1, 00:12:50.213 "io_size": 131072, 00:12:50.213 "runtime": 1.41766, 00:12:50.213 "iops": 14903.432416799515, 00:12:50.213 "mibps": 1862.9290520999393, 00:12:50.213 "io_failed": 1, 00:12:50.213 "io_timeout": 0, 00:12:50.214 "avg_latency_us": 92.62404414884568, 00:12:50.214 "min_latency_us": 27.276855895196505, 00:12:50.214 "max_latency_us": 1538.235807860262 00:12:50.214 } 00:12:50.214 ], 00:12:50.214 "core_count": 1 00:12:50.214 } 00:12:50.214 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.214 16:28:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82439 00:12:50.214 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 82439 ']' 00:12:50.214 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 82439 00:12:50.214 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:12:50.214 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:50.214 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82439 00:12:50.214 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:50.214 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:50.214 killing process with pid 82439 00:12:50.214 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82439' 00:12:50.214 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 82439 00:12:50.214 [2024-12-06 16:28:31.825668] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:50.214 16:28:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 82439 00:12:50.214 [2024-12-06 16:28:31.860950] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.473 16:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.shgjfswTff 00:12:50.473 16:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:50.473 16:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:50.473 16:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:50.473 16:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:50.473 16:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:50.473 16:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:50.473 16:28:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:50.473 00:12:50.473 real 0m3.421s 00:12:50.473 user 0m4.381s 00:12:50.473 sys 0m0.568s 00:12:50.473 16:28:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.473 16:28:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.473 ************************************ 00:12:50.473 END TEST raid_write_error_test 00:12:50.473 ************************************ 00:12:50.473 16:28:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:50.473 16:28:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:50.473 16:28:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:50.473 16:28:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.473 16:28:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:50.473 ************************************ 00:12:50.473 START TEST raid_state_function_test 00:12:50.473 ************************************ 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82566 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82566' 00:12:50.473 Process raid pid: 82566 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82566 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82566 ']' 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.473 16:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.473 [2024-12-06 16:28:32.247697] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:12:50.473 [2024-12-06 16:28:32.247823] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.731 [2024-12-06 16:28:32.420996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.731 [2024-12-06 16:28:32.450256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.731 [2024-12-06 16:28:32.494106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.731 [2024-12-06 16:28:32.494158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.296 [2024-12-06 16:28:33.097420] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.296 [2024-12-06 16:28:33.097491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.296 [2024-12-06 16:28:33.097510] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:51.296 [2024-12-06 16:28:33.097524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:51.296 [2024-12-06 16:28:33.097531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:51.296 [2024-12-06 16:28:33.097544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:51.296 [2024-12-06 16:28:33.097551] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:51.296 [2024-12-06 16:28:33.097561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.296 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.555 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.555 "name": "Existed_Raid", 00:12:51.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.555 "strip_size_kb": 64, 00:12:51.555 "state": "configuring", 00:12:51.555 "raid_level": "concat", 00:12:51.555 "superblock": false, 00:12:51.555 "num_base_bdevs": 4, 00:12:51.555 "num_base_bdevs_discovered": 0, 00:12:51.555 "num_base_bdevs_operational": 4, 00:12:51.555 "base_bdevs_list": [ 00:12:51.555 { 00:12:51.555 "name": "BaseBdev1", 00:12:51.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.555 "is_configured": false, 00:12:51.555 "data_offset": 0, 00:12:51.555 "data_size": 0 00:12:51.555 }, 00:12:51.555 { 00:12:51.555 "name": "BaseBdev2", 00:12:51.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.555 "is_configured": false, 00:12:51.555 "data_offset": 0, 00:12:51.555 "data_size": 0 00:12:51.555 }, 00:12:51.555 { 00:12:51.555 "name": "BaseBdev3", 00:12:51.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.555 "is_configured": false, 00:12:51.555 "data_offset": 0, 00:12:51.555 "data_size": 0 00:12:51.555 }, 00:12:51.555 { 00:12:51.555 "name": "BaseBdev4", 00:12:51.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.555 "is_configured": false, 00:12:51.555 "data_offset": 0, 00:12:51.555 "data_size": 0 00:12:51.555 } 00:12:51.555 ] 00:12:51.555 }' 00:12:51.555 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.555 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.814 [2024-12-06 16:28:33.552559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.814 [2024-12-06 16:28:33.552615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.814 [2024-12-06 16:28:33.564531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.814 [2024-12-06 16:28:33.564574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.814 [2024-12-06 16:28:33.564583] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:51.814 [2024-12-06 16:28:33.564594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:51.814 [2024-12-06 16:28:33.564601] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:51.814 [2024-12-06 16:28:33.564611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:51.814 [2024-12-06 16:28:33.564618] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:51.814 [2024-12-06 16:28:33.564627] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.814 [2024-12-06 16:28:33.585704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.814 BaseBdev1 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.814 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.815 [ 00:12:51.815 { 00:12:51.815 "name": "BaseBdev1", 00:12:51.815 "aliases": [ 00:12:51.815 "c0052e9e-9db2-4bad-995a-f07bb5eb4e85" 00:12:51.815 ], 00:12:51.815 "product_name": "Malloc disk", 00:12:51.815 "block_size": 512, 00:12:51.815 "num_blocks": 65536, 00:12:51.815 "uuid": "c0052e9e-9db2-4bad-995a-f07bb5eb4e85", 00:12:51.815 "assigned_rate_limits": { 00:12:51.815 "rw_ios_per_sec": 0, 00:12:51.815 "rw_mbytes_per_sec": 0, 00:12:51.815 "r_mbytes_per_sec": 0, 00:12:51.815 "w_mbytes_per_sec": 0 00:12:51.815 }, 00:12:51.815 "claimed": true, 00:12:51.815 "claim_type": "exclusive_write", 00:12:51.815 "zoned": false, 00:12:51.815 "supported_io_types": { 00:12:51.815 "read": true, 00:12:51.815 "write": true, 00:12:51.815 "unmap": true, 00:12:51.815 "flush": true, 00:12:51.815 "reset": true, 00:12:51.815 "nvme_admin": false, 00:12:51.815 "nvme_io": false, 00:12:51.815 "nvme_io_md": false, 00:12:51.815 "write_zeroes": true, 00:12:51.815 "zcopy": true, 00:12:51.815 "get_zone_info": false, 00:12:51.815 "zone_management": false, 00:12:51.815 "zone_append": false, 00:12:51.815 "compare": false, 00:12:51.815 "compare_and_write": false, 00:12:51.815 "abort": true, 00:12:51.815 "seek_hole": false, 00:12:51.815 "seek_data": false, 00:12:51.815 "copy": true, 00:12:51.815 "nvme_iov_md": false 00:12:51.815 }, 00:12:51.815 "memory_domains": [ 00:12:51.815 { 00:12:51.815 "dma_device_id": "system", 00:12:51.815 "dma_device_type": 1 00:12:51.815 }, 00:12:51.815 { 00:12:51.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.815 "dma_device_type": 2 00:12:51.815 } 00:12:51.815 ], 00:12:51.815 "driver_specific": {} 00:12:51.815 } 00:12:51.815 ] 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.815 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.074 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.074 "name": "Existed_Raid", 
00:12:52.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.074 "strip_size_kb": 64, 00:12:52.074 "state": "configuring", 00:12:52.074 "raid_level": "concat", 00:12:52.074 "superblock": false, 00:12:52.074 "num_base_bdevs": 4, 00:12:52.074 "num_base_bdevs_discovered": 1, 00:12:52.074 "num_base_bdevs_operational": 4, 00:12:52.074 "base_bdevs_list": [ 00:12:52.074 { 00:12:52.074 "name": "BaseBdev1", 00:12:52.074 "uuid": "c0052e9e-9db2-4bad-995a-f07bb5eb4e85", 00:12:52.074 "is_configured": true, 00:12:52.074 "data_offset": 0, 00:12:52.074 "data_size": 65536 00:12:52.074 }, 00:12:52.074 { 00:12:52.074 "name": "BaseBdev2", 00:12:52.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.074 "is_configured": false, 00:12:52.074 "data_offset": 0, 00:12:52.074 "data_size": 0 00:12:52.074 }, 00:12:52.074 { 00:12:52.074 "name": "BaseBdev3", 00:12:52.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.075 "is_configured": false, 00:12:52.075 "data_offset": 0, 00:12:52.075 "data_size": 0 00:12:52.075 }, 00:12:52.075 { 00:12:52.075 "name": "BaseBdev4", 00:12:52.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.075 "is_configured": false, 00:12:52.075 "data_offset": 0, 00:12:52.075 "data_size": 0 00:12:52.075 } 00:12:52.075 ] 00:12:52.075 }' 00:12:52.075 16:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.075 16:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.333 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.333 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.333 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.333 [2024-12-06 16:28:34.056997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.333 [2024-12-06 16:28:34.057055] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:12:52.333 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.333 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:52.333 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.333 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.333 [2024-12-06 16:28:34.069020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.333 [2024-12-06 16:28:34.071021] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:52.334 [2024-12-06 16:28:34.071061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:52.334 [2024-12-06 16:28:34.071071] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:52.334 [2024-12-06 16:28:34.071080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:52.334 [2024-12-06 16:28:34.071086] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:52.334 [2024-12-06 16:28:34.071094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.334 "name": "Existed_Raid", 00:12:52.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.334 "strip_size_kb": 64, 00:12:52.334 "state": "configuring", 00:12:52.334 "raid_level": "concat", 00:12:52.334 "superblock": false, 00:12:52.334 "num_base_bdevs": 4, 00:12:52.334 
"num_base_bdevs_discovered": 1, 00:12:52.334 "num_base_bdevs_operational": 4, 00:12:52.334 "base_bdevs_list": [ 00:12:52.334 { 00:12:52.334 "name": "BaseBdev1", 00:12:52.334 "uuid": "c0052e9e-9db2-4bad-995a-f07bb5eb4e85", 00:12:52.334 "is_configured": true, 00:12:52.334 "data_offset": 0, 00:12:52.334 "data_size": 65536 00:12:52.334 }, 00:12:52.334 { 00:12:52.334 "name": "BaseBdev2", 00:12:52.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.334 "is_configured": false, 00:12:52.334 "data_offset": 0, 00:12:52.334 "data_size": 0 00:12:52.334 }, 00:12:52.334 { 00:12:52.334 "name": "BaseBdev3", 00:12:52.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.334 "is_configured": false, 00:12:52.334 "data_offset": 0, 00:12:52.334 "data_size": 0 00:12:52.334 }, 00:12:52.334 { 00:12:52.334 "name": "BaseBdev4", 00:12:52.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.334 "is_configured": false, 00:12:52.334 "data_offset": 0, 00:12:52.334 "data_size": 0 00:12:52.334 } 00:12:52.334 ] 00:12:52.334 }' 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.334 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.901 [2024-12-06 16:28:34.519563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.901 BaseBdev2 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:52.901 16:28:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.901 [ 00:12:52.901 { 00:12:52.901 "name": "BaseBdev2", 00:12:52.901 "aliases": [ 00:12:52.901 "df7fe5d4-9adb-4138-b37c-f7b993887a9e" 00:12:52.901 ], 00:12:52.901 "product_name": "Malloc disk", 00:12:52.901 "block_size": 512, 00:12:52.901 "num_blocks": 65536, 00:12:52.901 "uuid": "df7fe5d4-9adb-4138-b37c-f7b993887a9e", 00:12:52.901 "assigned_rate_limits": { 00:12:52.901 "rw_ios_per_sec": 0, 00:12:52.901 "rw_mbytes_per_sec": 0, 00:12:52.901 "r_mbytes_per_sec": 0, 00:12:52.901 "w_mbytes_per_sec": 0 00:12:52.901 }, 00:12:52.901 "claimed": true, 00:12:52.901 "claim_type": "exclusive_write", 00:12:52.901 "zoned": false, 00:12:52.901 "supported_io_types": { 
00:12:52.901 "read": true, 00:12:52.901 "write": true, 00:12:52.901 "unmap": true, 00:12:52.901 "flush": true, 00:12:52.901 "reset": true, 00:12:52.901 "nvme_admin": false, 00:12:52.901 "nvme_io": false, 00:12:52.901 "nvme_io_md": false, 00:12:52.901 "write_zeroes": true, 00:12:52.901 "zcopy": true, 00:12:52.901 "get_zone_info": false, 00:12:52.901 "zone_management": false, 00:12:52.901 "zone_append": false, 00:12:52.901 "compare": false, 00:12:52.901 "compare_and_write": false, 00:12:52.901 "abort": true, 00:12:52.901 "seek_hole": false, 00:12:52.901 "seek_data": false, 00:12:52.901 "copy": true, 00:12:52.901 "nvme_iov_md": false 00:12:52.901 }, 00:12:52.901 "memory_domains": [ 00:12:52.901 { 00:12:52.901 "dma_device_id": "system", 00:12:52.901 "dma_device_type": 1 00:12:52.901 }, 00:12:52.901 { 00:12:52.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.901 "dma_device_type": 2 00:12:52.901 } 00:12:52.901 ], 00:12:52.901 "driver_specific": {} 00:12:52.901 } 00:12:52.901 ] 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.901 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.901 "name": "Existed_Raid", 00:12:52.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.901 "strip_size_kb": 64, 00:12:52.901 "state": "configuring", 00:12:52.901 "raid_level": "concat", 00:12:52.901 "superblock": false, 00:12:52.901 "num_base_bdevs": 4, 00:12:52.901 "num_base_bdevs_discovered": 2, 00:12:52.901 "num_base_bdevs_operational": 4, 00:12:52.901 "base_bdevs_list": [ 00:12:52.901 { 00:12:52.901 "name": "BaseBdev1", 00:12:52.901 "uuid": "c0052e9e-9db2-4bad-995a-f07bb5eb4e85", 00:12:52.901 "is_configured": true, 00:12:52.901 "data_offset": 0, 00:12:52.901 "data_size": 65536 00:12:52.901 }, 00:12:52.901 { 00:12:52.901 "name": "BaseBdev2", 00:12:52.901 "uuid": "df7fe5d4-9adb-4138-b37c-f7b993887a9e", 00:12:52.902 
"is_configured": true, 00:12:52.902 "data_offset": 0, 00:12:52.902 "data_size": 65536 00:12:52.902 }, 00:12:52.902 { 00:12:52.902 "name": "BaseBdev3", 00:12:52.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.902 "is_configured": false, 00:12:52.902 "data_offset": 0, 00:12:52.902 "data_size": 0 00:12:52.902 }, 00:12:52.902 { 00:12:52.902 "name": "BaseBdev4", 00:12:52.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.902 "is_configured": false, 00:12:52.902 "data_offset": 0, 00:12:52.902 "data_size": 0 00:12:52.902 } 00:12:52.902 ] 00:12:52.902 }' 00:12:52.902 16:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.902 16:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.469 [2024-12-06 16:28:35.036408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.469 BaseBdev3 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.469 [ 00:12:53.469 { 00:12:53.469 "name": "BaseBdev3", 00:12:53.469 "aliases": [ 00:12:53.469 "c077be04-d520-4d9f-b9e9-9b34ae310da8" 00:12:53.469 ], 00:12:53.469 "product_name": "Malloc disk", 00:12:53.469 "block_size": 512, 00:12:53.469 "num_blocks": 65536, 00:12:53.469 "uuid": "c077be04-d520-4d9f-b9e9-9b34ae310da8", 00:12:53.469 "assigned_rate_limits": { 00:12:53.469 "rw_ios_per_sec": 0, 00:12:53.469 "rw_mbytes_per_sec": 0, 00:12:53.469 "r_mbytes_per_sec": 0, 00:12:53.469 "w_mbytes_per_sec": 0 00:12:53.469 }, 00:12:53.469 "claimed": true, 00:12:53.469 "claim_type": "exclusive_write", 00:12:53.469 "zoned": false, 00:12:53.469 "supported_io_types": { 00:12:53.469 "read": true, 00:12:53.469 "write": true, 00:12:53.469 "unmap": true, 00:12:53.469 "flush": true, 00:12:53.469 "reset": true, 00:12:53.469 "nvme_admin": false, 00:12:53.469 "nvme_io": false, 00:12:53.469 "nvme_io_md": false, 00:12:53.469 "write_zeroes": true, 00:12:53.469 "zcopy": true, 00:12:53.469 "get_zone_info": false, 00:12:53.469 "zone_management": false, 00:12:53.469 "zone_append": false, 00:12:53.469 "compare": false, 00:12:53.469 "compare_and_write": false, 
00:12:53.469 "abort": true, 00:12:53.469 "seek_hole": false, 00:12:53.469 "seek_data": false, 00:12:53.469 "copy": true, 00:12:53.469 "nvme_iov_md": false 00:12:53.469 }, 00:12:53.469 "memory_domains": [ 00:12:53.469 { 00:12:53.469 "dma_device_id": "system", 00:12:53.469 "dma_device_type": 1 00:12:53.469 }, 00:12:53.469 { 00:12:53.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.469 "dma_device_type": 2 00:12:53.469 } 00:12:53.469 ], 00:12:53.469 "driver_specific": {} 00:12:53.469 } 00:12:53.469 ] 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.469 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.469 "name": "Existed_Raid", 00:12:53.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.469 "strip_size_kb": 64, 00:12:53.469 "state": "configuring", 00:12:53.469 "raid_level": "concat", 00:12:53.469 "superblock": false, 00:12:53.469 "num_base_bdevs": 4, 00:12:53.469 "num_base_bdevs_discovered": 3, 00:12:53.469 "num_base_bdevs_operational": 4, 00:12:53.469 "base_bdevs_list": [ 00:12:53.469 { 00:12:53.469 "name": "BaseBdev1", 00:12:53.470 "uuid": "c0052e9e-9db2-4bad-995a-f07bb5eb4e85", 00:12:53.470 "is_configured": true, 00:12:53.470 "data_offset": 0, 00:12:53.470 "data_size": 65536 00:12:53.470 }, 00:12:53.470 { 00:12:53.470 "name": "BaseBdev2", 00:12:53.470 "uuid": "df7fe5d4-9adb-4138-b37c-f7b993887a9e", 00:12:53.470 "is_configured": true, 00:12:53.470 "data_offset": 0, 00:12:53.470 "data_size": 65536 00:12:53.470 }, 00:12:53.470 { 00:12:53.470 "name": "BaseBdev3", 00:12:53.470 "uuid": "c077be04-d520-4d9f-b9e9-9b34ae310da8", 00:12:53.470 "is_configured": true, 00:12:53.470 "data_offset": 0, 00:12:53.470 "data_size": 65536 00:12:53.470 }, 00:12:53.470 { 00:12:53.470 "name": "BaseBdev4", 00:12:53.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.470 "is_configured": false, 
00:12:53.470 "data_offset": 0, 00:12:53.470 "data_size": 0 00:12:53.470 } 00:12:53.470 ] 00:12:53.470 }' 00:12:53.470 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.470 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.727 [2024-12-06 16:28:35.494873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:53.727 [2024-12-06 16:28:35.494936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:53.727 [2024-12-06 16:28:35.494956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:53.727 [2024-12-06 16:28:35.495295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:53.727 [2024-12-06 16:28:35.495450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:53.727 [2024-12-06 16:28:35.495471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:12:53.727 [2024-12-06 16:28:35.495691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.727 BaseBdev4 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.727 [ 00:12:53.727 { 00:12:53.727 "name": "BaseBdev4", 00:12:53.727 "aliases": [ 00:12:53.727 "8a1cf669-4672-429b-953b-ddeeda251fbe" 00:12:53.727 ], 00:12:53.727 "product_name": "Malloc disk", 00:12:53.727 "block_size": 512, 00:12:53.727 "num_blocks": 65536, 00:12:53.727 "uuid": "8a1cf669-4672-429b-953b-ddeeda251fbe", 00:12:53.727 "assigned_rate_limits": { 00:12:53.727 "rw_ios_per_sec": 0, 00:12:53.727 "rw_mbytes_per_sec": 0, 00:12:53.727 "r_mbytes_per_sec": 0, 00:12:53.727 "w_mbytes_per_sec": 0 00:12:53.727 }, 00:12:53.727 "claimed": true, 00:12:53.727 "claim_type": "exclusive_write", 00:12:53.727 "zoned": false, 00:12:53.727 "supported_io_types": { 00:12:53.727 "read": true, 00:12:53.727 "write": true, 00:12:53.727 "unmap": true, 00:12:53.727 "flush": true, 00:12:53.727 "reset": true, 00:12:53.727 
"nvme_admin": false, 00:12:53.727 "nvme_io": false, 00:12:53.727 "nvme_io_md": false, 00:12:53.727 "write_zeroes": true, 00:12:53.727 "zcopy": true, 00:12:53.727 "get_zone_info": false, 00:12:53.727 "zone_management": false, 00:12:53.727 "zone_append": false, 00:12:53.727 "compare": false, 00:12:53.727 "compare_and_write": false, 00:12:53.727 "abort": true, 00:12:53.727 "seek_hole": false, 00:12:53.727 "seek_data": false, 00:12:53.727 "copy": true, 00:12:53.727 "nvme_iov_md": false 00:12:53.727 }, 00:12:53.727 "memory_domains": [ 00:12:53.727 { 00:12:53.727 "dma_device_id": "system", 00:12:53.727 "dma_device_type": 1 00:12:53.727 }, 00:12:53.727 { 00:12:53.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.727 "dma_device_type": 2 00:12:53.727 } 00:12:53.727 ], 00:12:53.727 "driver_specific": {} 00:12:53.727 } 00:12:53.727 ] 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.727 
16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.727 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.986 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.986 "name": "Existed_Raid", 00:12:53.986 "uuid": "79c6ad59-974c-4c38-ab30-c0f5a606838a", 00:12:53.986 "strip_size_kb": 64, 00:12:53.986 "state": "online", 00:12:53.986 "raid_level": "concat", 00:12:53.986 "superblock": false, 00:12:53.986 "num_base_bdevs": 4, 00:12:53.986 "num_base_bdevs_discovered": 4, 00:12:53.986 "num_base_bdevs_operational": 4, 00:12:53.986 "base_bdevs_list": [ 00:12:53.986 { 00:12:53.986 "name": "BaseBdev1", 00:12:53.986 "uuid": "c0052e9e-9db2-4bad-995a-f07bb5eb4e85", 00:12:53.986 "is_configured": true, 00:12:53.986 "data_offset": 0, 00:12:53.986 "data_size": 65536 00:12:53.986 }, 00:12:53.986 { 00:12:53.986 "name": "BaseBdev2", 00:12:53.986 "uuid": "df7fe5d4-9adb-4138-b37c-f7b993887a9e", 00:12:53.986 "is_configured": true, 00:12:53.986 "data_offset": 0, 00:12:53.986 "data_size": 65536 00:12:53.986 }, 00:12:53.986 { 00:12:53.986 "name": "BaseBdev3", 
00:12:53.986 "uuid": "c077be04-d520-4d9f-b9e9-9b34ae310da8", 00:12:53.986 "is_configured": true, 00:12:53.986 "data_offset": 0, 00:12:53.986 "data_size": 65536 00:12:53.986 }, 00:12:53.986 { 00:12:53.986 "name": "BaseBdev4", 00:12:53.986 "uuid": "8a1cf669-4672-429b-953b-ddeeda251fbe", 00:12:53.986 "is_configured": true, 00:12:53.986 "data_offset": 0, 00:12:53.986 "data_size": 65536 00:12:53.986 } 00:12:53.986 ] 00:12:53.986 }' 00:12:53.986 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.986 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.244 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:54.244 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:54.244 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:54.244 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:54.244 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:54.244 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:54.244 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:54.244 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.244 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:54.244 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.244 [2024-12-06 16:28:35.938662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.244 16:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.244 
16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:54.244 "name": "Existed_Raid", 00:12:54.244 "aliases": [ 00:12:54.244 "79c6ad59-974c-4c38-ab30-c0f5a606838a" 00:12:54.244 ], 00:12:54.244 "product_name": "Raid Volume", 00:12:54.244 "block_size": 512, 00:12:54.244 "num_blocks": 262144, 00:12:54.244 "uuid": "79c6ad59-974c-4c38-ab30-c0f5a606838a", 00:12:54.244 "assigned_rate_limits": { 00:12:54.244 "rw_ios_per_sec": 0, 00:12:54.244 "rw_mbytes_per_sec": 0, 00:12:54.244 "r_mbytes_per_sec": 0, 00:12:54.244 "w_mbytes_per_sec": 0 00:12:54.244 }, 00:12:54.244 "claimed": false, 00:12:54.244 "zoned": false, 00:12:54.244 "supported_io_types": { 00:12:54.244 "read": true, 00:12:54.244 "write": true, 00:12:54.244 "unmap": true, 00:12:54.244 "flush": true, 00:12:54.244 "reset": true, 00:12:54.244 "nvme_admin": false, 00:12:54.244 "nvme_io": false, 00:12:54.244 "nvme_io_md": false, 00:12:54.244 "write_zeroes": true, 00:12:54.244 "zcopy": false, 00:12:54.244 "get_zone_info": false, 00:12:54.244 "zone_management": false, 00:12:54.244 "zone_append": false, 00:12:54.244 "compare": false, 00:12:54.244 "compare_and_write": false, 00:12:54.244 "abort": false, 00:12:54.244 "seek_hole": false, 00:12:54.244 "seek_data": false, 00:12:54.244 "copy": false, 00:12:54.244 "nvme_iov_md": false 00:12:54.244 }, 00:12:54.244 "memory_domains": [ 00:12:54.244 { 00:12:54.244 "dma_device_id": "system", 00:12:54.244 "dma_device_type": 1 00:12:54.244 }, 00:12:54.244 { 00:12:54.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.244 "dma_device_type": 2 00:12:54.244 }, 00:12:54.244 { 00:12:54.244 "dma_device_id": "system", 00:12:54.244 "dma_device_type": 1 00:12:54.244 }, 00:12:54.244 { 00:12:54.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.244 "dma_device_type": 2 00:12:54.244 }, 00:12:54.244 { 00:12:54.244 "dma_device_id": "system", 00:12:54.244 "dma_device_type": 1 00:12:54.244 }, 00:12:54.244 { 00:12:54.244 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:54.244 "dma_device_type": 2 00:12:54.244 }, 00:12:54.244 { 00:12:54.244 "dma_device_id": "system", 00:12:54.244 "dma_device_type": 1 00:12:54.244 }, 00:12:54.244 { 00:12:54.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.244 "dma_device_type": 2 00:12:54.244 } 00:12:54.244 ], 00:12:54.244 "driver_specific": { 00:12:54.244 "raid": { 00:12:54.244 "uuid": "79c6ad59-974c-4c38-ab30-c0f5a606838a", 00:12:54.244 "strip_size_kb": 64, 00:12:54.244 "state": "online", 00:12:54.244 "raid_level": "concat", 00:12:54.244 "superblock": false, 00:12:54.244 "num_base_bdevs": 4, 00:12:54.244 "num_base_bdevs_discovered": 4, 00:12:54.244 "num_base_bdevs_operational": 4, 00:12:54.244 "base_bdevs_list": [ 00:12:54.244 { 00:12:54.244 "name": "BaseBdev1", 00:12:54.244 "uuid": "c0052e9e-9db2-4bad-995a-f07bb5eb4e85", 00:12:54.244 "is_configured": true, 00:12:54.244 "data_offset": 0, 00:12:54.244 "data_size": 65536 00:12:54.244 }, 00:12:54.244 { 00:12:54.244 "name": "BaseBdev2", 00:12:54.244 "uuid": "df7fe5d4-9adb-4138-b37c-f7b993887a9e", 00:12:54.244 "is_configured": true, 00:12:54.244 "data_offset": 0, 00:12:54.244 "data_size": 65536 00:12:54.244 }, 00:12:54.244 { 00:12:54.244 "name": "BaseBdev3", 00:12:54.244 "uuid": "c077be04-d520-4d9f-b9e9-9b34ae310da8", 00:12:54.244 "is_configured": true, 00:12:54.244 "data_offset": 0, 00:12:54.244 "data_size": 65536 00:12:54.244 }, 00:12:54.244 { 00:12:54.244 "name": "BaseBdev4", 00:12:54.244 "uuid": "8a1cf669-4672-429b-953b-ddeeda251fbe", 00:12:54.244 "is_configured": true, 00:12:54.244 "data_offset": 0, 00:12:54.244 "data_size": 65536 00:12:54.244 } 00:12:54.244 ] 00:12:54.244 } 00:12:54.244 } 00:12:54.244 }' 00:12:54.244 16:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:54.244 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:54.244 BaseBdev2 
00:12:54.244 BaseBdev3 00:12:54.244 BaseBdev4' 00:12:54.244 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.244 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:54.244 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.244 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:54.244 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.244 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.244 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.503 16:28:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.503 16:28:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.503 [2024-12-06 16:28:36.249730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.503 [2024-12-06 16:28:36.249776] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.503 [2024-12-06 16:28:36.249841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.503 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.503 "name": "Existed_Raid", 00:12:54.503 "uuid": "79c6ad59-974c-4c38-ab30-c0f5a606838a", 00:12:54.503 "strip_size_kb": 64, 00:12:54.503 "state": "offline", 00:12:54.503 "raid_level": "concat", 00:12:54.503 "superblock": false, 00:12:54.503 "num_base_bdevs": 4, 00:12:54.503 "num_base_bdevs_discovered": 3, 00:12:54.503 "num_base_bdevs_operational": 3, 00:12:54.503 "base_bdevs_list": [ 00:12:54.503 { 00:12:54.503 "name": null, 00:12:54.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.503 "is_configured": false, 00:12:54.503 "data_offset": 0, 00:12:54.503 "data_size": 65536 00:12:54.503 }, 00:12:54.503 { 00:12:54.503 "name": "BaseBdev2", 00:12:54.503 "uuid": "df7fe5d4-9adb-4138-b37c-f7b993887a9e", 00:12:54.503 "is_configured": 
true, 00:12:54.503 "data_offset": 0, 00:12:54.503 "data_size": 65536 00:12:54.503 }, 00:12:54.503 { 00:12:54.503 "name": "BaseBdev3", 00:12:54.503 "uuid": "c077be04-d520-4d9f-b9e9-9b34ae310da8", 00:12:54.504 "is_configured": true, 00:12:54.504 "data_offset": 0, 00:12:54.504 "data_size": 65536 00:12:54.504 }, 00:12:54.504 { 00:12:54.504 "name": "BaseBdev4", 00:12:54.504 "uuid": "8a1cf669-4672-429b-953b-ddeeda251fbe", 00:12:54.504 "is_configured": true, 00:12:54.504 "data_offset": 0, 00:12:54.504 "data_size": 65536 00:12:54.504 } 00:12:54.504 ] 00:12:54.504 }' 00:12:54.504 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.504 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.071 [2024-12-06 16:28:36.784750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.071 [2024-12-06 16:28:36.852317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.071 16:28:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.071 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.331 [2024-12-06 16:28:36.923897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:55.331 [2024-12-06 16:28:36.923958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.331 16:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.331 BaseBdev2 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.331 [ 00:12:55.331 { 00:12:55.331 "name": "BaseBdev2", 00:12:55.331 "aliases": [ 00:12:55.331 "37cc6d6b-f8b6-4af8-80a7-bc30f8cdb9bf" 00:12:55.331 ], 00:12:55.331 "product_name": "Malloc disk", 00:12:55.331 "block_size": 512, 00:12:55.331 "num_blocks": 65536, 00:12:55.331 "uuid": "37cc6d6b-f8b6-4af8-80a7-bc30f8cdb9bf", 00:12:55.331 "assigned_rate_limits": { 00:12:55.331 "rw_ios_per_sec": 0, 00:12:55.331 "rw_mbytes_per_sec": 0, 00:12:55.331 "r_mbytes_per_sec": 0, 00:12:55.331 "w_mbytes_per_sec": 0 00:12:55.331 }, 00:12:55.331 "claimed": false, 00:12:55.331 "zoned": false, 00:12:55.331 "supported_io_types": { 00:12:55.331 "read": true, 00:12:55.331 "write": true, 00:12:55.331 "unmap": true, 00:12:55.331 "flush": true, 00:12:55.331 "reset": true, 00:12:55.331 "nvme_admin": false, 00:12:55.331 "nvme_io": false, 00:12:55.331 "nvme_io_md": false, 00:12:55.331 "write_zeroes": true, 00:12:55.331 "zcopy": true, 00:12:55.331 "get_zone_info": false, 00:12:55.331 "zone_management": false, 00:12:55.331 "zone_append": false, 00:12:55.331 "compare": false, 00:12:55.331 "compare_and_write": false, 00:12:55.331 "abort": true, 00:12:55.331 "seek_hole": false, 00:12:55.331 
"seek_data": false, 00:12:55.331 "copy": true, 00:12:55.331 "nvme_iov_md": false 00:12:55.331 }, 00:12:55.331 "memory_domains": [ 00:12:55.331 { 00:12:55.331 "dma_device_id": "system", 00:12:55.331 "dma_device_type": 1 00:12:55.331 }, 00:12:55.331 { 00:12:55.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.331 "dma_device_type": 2 00:12:55.331 } 00:12:55.331 ], 00:12:55.331 "driver_specific": {} 00:12:55.331 } 00:12:55.331 ] 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.331 BaseBdev3 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.331 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 [ 00:12:55.332 { 00:12:55.332 "name": "BaseBdev3", 00:12:55.332 "aliases": [ 00:12:55.332 "76d3c749-41b5-4db2-8c7f-eae3d020c884" 00:12:55.332 ], 00:12:55.332 "product_name": "Malloc disk", 00:12:55.332 "block_size": 512, 00:12:55.332 "num_blocks": 65536, 00:12:55.332 "uuid": "76d3c749-41b5-4db2-8c7f-eae3d020c884", 00:12:55.332 "assigned_rate_limits": { 00:12:55.332 "rw_ios_per_sec": 0, 00:12:55.332 "rw_mbytes_per_sec": 0, 00:12:55.332 "r_mbytes_per_sec": 0, 00:12:55.332 "w_mbytes_per_sec": 0 00:12:55.332 }, 00:12:55.332 "claimed": false, 00:12:55.332 "zoned": false, 00:12:55.332 "supported_io_types": { 00:12:55.332 "read": true, 00:12:55.332 "write": true, 00:12:55.332 "unmap": true, 00:12:55.332 "flush": true, 00:12:55.332 "reset": true, 00:12:55.332 "nvme_admin": false, 00:12:55.332 "nvme_io": false, 00:12:55.332 "nvme_io_md": false, 00:12:55.332 "write_zeroes": true, 00:12:55.332 "zcopy": true, 00:12:55.332 "get_zone_info": false, 00:12:55.332 "zone_management": false, 00:12:55.332 "zone_append": false, 00:12:55.332 "compare": false, 00:12:55.332 "compare_and_write": false, 00:12:55.332 "abort": true, 00:12:55.332 "seek_hole": false, 00:12:55.332 "seek_data": false, 
00:12:55.332 "copy": true, 00:12:55.332 "nvme_iov_md": false 00:12:55.332 }, 00:12:55.332 "memory_domains": [ 00:12:55.332 { 00:12:55.332 "dma_device_id": "system", 00:12:55.332 "dma_device_type": 1 00:12:55.332 }, 00:12:55.332 { 00:12:55.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.332 "dma_device_type": 2 00:12:55.332 } 00:12:55.332 ], 00:12:55.332 "driver_specific": {} 00:12:55.332 } 00:12:55.332 ] 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 BaseBdev4 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.332 
16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 [ 00:12:55.332 { 00:12:55.332 "name": "BaseBdev4", 00:12:55.332 "aliases": [ 00:12:55.332 "3f8b22d3-c733-4c38-afa5-30b3fcd488c9" 00:12:55.332 ], 00:12:55.332 "product_name": "Malloc disk", 00:12:55.332 "block_size": 512, 00:12:55.332 "num_blocks": 65536, 00:12:55.332 "uuid": "3f8b22d3-c733-4c38-afa5-30b3fcd488c9", 00:12:55.332 "assigned_rate_limits": { 00:12:55.332 "rw_ios_per_sec": 0, 00:12:55.332 "rw_mbytes_per_sec": 0, 00:12:55.332 "r_mbytes_per_sec": 0, 00:12:55.332 "w_mbytes_per_sec": 0 00:12:55.332 }, 00:12:55.332 "claimed": false, 00:12:55.332 "zoned": false, 00:12:55.332 "supported_io_types": { 00:12:55.332 "read": true, 00:12:55.332 "write": true, 00:12:55.332 "unmap": true, 00:12:55.332 "flush": true, 00:12:55.332 "reset": true, 00:12:55.332 "nvme_admin": false, 00:12:55.332 "nvme_io": false, 00:12:55.332 "nvme_io_md": false, 00:12:55.332 "write_zeroes": true, 00:12:55.332 "zcopy": true, 00:12:55.332 "get_zone_info": false, 00:12:55.332 "zone_management": false, 00:12:55.332 "zone_append": false, 00:12:55.332 "compare": false, 00:12:55.332 "compare_and_write": false, 00:12:55.332 "abort": true, 00:12:55.332 "seek_hole": false, 00:12:55.332 "seek_data": false, 00:12:55.332 
"copy": true, 00:12:55.332 "nvme_iov_md": false 00:12:55.332 }, 00:12:55.332 "memory_domains": [ 00:12:55.332 { 00:12:55.332 "dma_device_id": "system", 00:12:55.332 "dma_device_type": 1 00:12:55.332 }, 00:12:55.332 { 00:12:55.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.332 "dma_device_type": 2 00:12:55.332 } 00:12:55.332 ], 00:12:55.332 "driver_specific": {} 00:12:55.332 } 00:12:55.332 ] 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 [2024-12-06 16:28:37.155080] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.332 [2024-12-06 16:28:37.155127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.332 [2024-12-06 16:28:37.155153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.332 [2024-12-06 16:28:37.157245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:55.332 [2024-12-06 16:28:37.157306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.332 16:28:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.332 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.592 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.592 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.592 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.592 "name": "Existed_Raid", 00:12:55.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.592 "strip_size_kb": 64, 00:12:55.592 "state": "configuring", 00:12:55.592 
"raid_level": "concat", 00:12:55.592 "superblock": false, 00:12:55.592 "num_base_bdevs": 4, 00:12:55.592 "num_base_bdevs_discovered": 3, 00:12:55.592 "num_base_bdevs_operational": 4, 00:12:55.592 "base_bdevs_list": [ 00:12:55.592 { 00:12:55.592 "name": "BaseBdev1", 00:12:55.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.592 "is_configured": false, 00:12:55.592 "data_offset": 0, 00:12:55.592 "data_size": 0 00:12:55.592 }, 00:12:55.592 { 00:12:55.592 "name": "BaseBdev2", 00:12:55.592 "uuid": "37cc6d6b-f8b6-4af8-80a7-bc30f8cdb9bf", 00:12:55.592 "is_configured": true, 00:12:55.592 "data_offset": 0, 00:12:55.592 "data_size": 65536 00:12:55.592 }, 00:12:55.592 { 00:12:55.592 "name": "BaseBdev3", 00:12:55.592 "uuid": "76d3c749-41b5-4db2-8c7f-eae3d020c884", 00:12:55.592 "is_configured": true, 00:12:55.592 "data_offset": 0, 00:12:55.592 "data_size": 65536 00:12:55.592 }, 00:12:55.592 { 00:12:55.592 "name": "BaseBdev4", 00:12:55.592 "uuid": "3f8b22d3-c733-4c38-afa5-30b3fcd488c9", 00:12:55.592 "is_configured": true, 00:12:55.592 "data_offset": 0, 00:12:55.592 "data_size": 65536 00:12:55.592 } 00:12:55.592 ] 00:12:55.592 }' 00:12:55.592 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.592 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.852 [2024-12-06 16:28:37.642298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.852 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.111 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.111 "name": "Existed_Raid", 00:12:56.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.111 "strip_size_kb": 64, 00:12:56.111 "state": "configuring", 00:12:56.111 "raid_level": "concat", 00:12:56.111 "superblock": false, 
00:12:56.111 "num_base_bdevs": 4, 00:12:56.111 "num_base_bdevs_discovered": 2, 00:12:56.111 "num_base_bdevs_operational": 4, 00:12:56.111 "base_bdevs_list": [ 00:12:56.111 { 00:12:56.111 "name": "BaseBdev1", 00:12:56.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.111 "is_configured": false, 00:12:56.111 "data_offset": 0, 00:12:56.111 "data_size": 0 00:12:56.111 }, 00:12:56.111 { 00:12:56.111 "name": null, 00:12:56.111 "uuid": "37cc6d6b-f8b6-4af8-80a7-bc30f8cdb9bf", 00:12:56.111 "is_configured": false, 00:12:56.111 "data_offset": 0, 00:12:56.111 "data_size": 65536 00:12:56.111 }, 00:12:56.111 { 00:12:56.111 "name": "BaseBdev3", 00:12:56.111 "uuid": "76d3c749-41b5-4db2-8c7f-eae3d020c884", 00:12:56.111 "is_configured": true, 00:12:56.111 "data_offset": 0, 00:12:56.111 "data_size": 65536 00:12:56.111 }, 00:12:56.111 { 00:12:56.111 "name": "BaseBdev4", 00:12:56.111 "uuid": "3f8b22d3-c733-4c38-afa5-30b3fcd488c9", 00:12:56.111 "is_configured": true, 00:12:56.111 "data_offset": 0, 00:12:56.111 "data_size": 65536 00:12:56.111 } 00:12:56.111 ] 00:12:56.111 }' 00:12:56.111 16:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.111 16:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.371 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.371 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.371 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:56.371 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.371 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.371 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:56.371 16:28:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:56.371 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.371 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.371 [2024-12-06 16:28:38.132487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.371 BaseBdev1 00:12:56.371 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.371 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:56.371 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:56.371 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.372 [ 00:12:56.372 { 00:12:56.372 "name": "BaseBdev1", 00:12:56.372 "aliases": [ 00:12:56.372 "454826dc-29f1-4d58-8618-34f8de8177fe" 00:12:56.372 ], 00:12:56.372 "product_name": "Malloc disk", 00:12:56.372 "block_size": 512, 00:12:56.372 "num_blocks": 65536, 00:12:56.372 "uuid": "454826dc-29f1-4d58-8618-34f8de8177fe", 00:12:56.372 "assigned_rate_limits": { 00:12:56.372 "rw_ios_per_sec": 0, 00:12:56.372 "rw_mbytes_per_sec": 0, 00:12:56.372 "r_mbytes_per_sec": 0, 00:12:56.372 "w_mbytes_per_sec": 0 00:12:56.372 }, 00:12:56.372 "claimed": true, 00:12:56.372 "claim_type": "exclusive_write", 00:12:56.372 "zoned": false, 00:12:56.372 "supported_io_types": { 00:12:56.372 "read": true, 00:12:56.372 "write": true, 00:12:56.372 "unmap": true, 00:12:56.372 "flush": true, 00:12:56.372 "reset": true, 00:12:56.372 "nvme_admin": false, 00:12:56.372 "nvme_io": false, 00:12:56.372 "nvme_io_md": false, 00:12:56.372 "write_zeroes": true, 00:12:56.372 "zcopy": true, 00:12:56.372 "get_zone_info": false, 00:12:56.372 "zone_management": false, 00:12:56.372 "zone_append": false, 00:12:56.372 "compare": false, 00:12:56.372 "compare_and_write": false, 00:12:56.372 "abort": true, 00:12:56.372 "seek_hole": false, 00:12:56.372 "seek_data": false, 00:12:56.372 "copy": true, 00:12:56.372 "nvme_iov_md": false 00:12:56.372 }, 00:12:56.372 "memory_domains": [ 00:12:56.372 { 00:12:56.372 "dma_device_id": "system", 00:12:56.372 "dma_device_type": 1 00:12:56.372 }, 00:12:56.372 { 00:12:56.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.372 "dma_device_type": 2 00:12:56.372 } 00:12:56.372 ], 00:12:56.372 "driver_specific": {} 00:12:56.372 } 00:12:56.372 ] 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
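The `waitforbdev BaseBdev1` step above polls `bdev_get_bdevs` until the freshly created bdev answers or a timeout expires. A self-contained sketch of that retry loop; `probe_bdev` is a stand-in for the real `rpc_cmd bdev_get_bdevs -b <name> -t 2000` call, and the attempt/sleep bounds are illustrative, not the script's actual values:

```shell
# Stand-in for the RPC probe: fails until the third attempt, simulating a
# bdev that takes a moment to finish examine.
attempts=0
probe_bdev() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

# Retry loop in the spirit of waitforbdev: bounded attempts, short sleep.
waitforbdev_sketch() {
    bdev_name=$1
    i=0
    while [ "$i" -lt 10 ]; do
        if probe_bdev "$bdev_name"; then
            echo "$bdev_name is ready after $attempts probes"
            return 0
        fi
        i=$((i + 1))
        sleep 0.1 2>/dev/null || sleep 1   # sub-second sleep is non-POSIX
    done
    echo "timed out waiting for $bdev_name" >&2
    return 1
}

waitforbdev_sketch BaseBdev1
```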
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.372 "name": "Existed_Raid", 00:12:56.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.372 "strip_size_kb": 64, 00:12:56.372 "state": "configuring", 00:12:56.372 "raid_level": "concat", 00:12:56.372 "superblock": false, 
00:12:56.372 "num_base_bdevs": 4, 00:12:56.372 "num_base_bdevs_discovered": 3, 00:12:56.372 "num_base_bdevs_operational": 4, 00:12:56.372 "base_bdevs_list": [ 00:12:56.372 { 00:12:56.372 "name": "BaseBdev1", 00:12:56.372 "uuid": "454826dc-29f1-4d58-8618-34f8de8177fe", 00:12:56.372 "is_configured": true, 00:12:56.372 "data_offset": 0, 00:12:56.372 "data_size": 65536 00:12:56.372 }, 00:12:56.372 { 00:12:56.372 "name": null, 00:12:56.372 "uuid": "37cc6d6b-f8b6-4af8-80a7-bc30f8cdb9bf", 00:12:56.372 "is_configured": false, 00:12:56.372 "data_offset": 0, 00:12:56.372 "data_size": 65536 00:12:56.372 }, 00:12:56.372 { 00:12:56.372 "name": "BaseBdev3", 00:12:56.372 "uuid": "76d3c749-41b5-4db2-8c7f-eae3d020c884", 00:12:56.372 "is_configured": true, 00:12:56.372 "data_offset": 0, 00:12:56.372 "data_size": 65536 00:12:56.372 }, 00:12:56.372 { 00:12:56.372 "name": "BaseBdev4", 00:12:56.372 "uuid": "3f8b22d3-c733-4c38-afa5-30b3fcd488c9", 00:12:56.372 "is_configured": true, 00:12:56.372 "data_offset": 0, 00:12:56.372 "data_size": 65536 00:12:56.372 } 00:12:56.372 ] 00:12:56.372 }' 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.372 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:56.943 16:28:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.943 [2024-12-06 16:28:38.639697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.943 16:28:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.943 "name": "Existed_Raid", 00:12:56.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.943 "strip_size_kb": 64, 00:12:56.943 "state": "configuring", 00:12:56.943 "raid_level": "concat", 00:12:56.943 "superblock": false, 00:12:56.943 "num_base_bdevs": 4, 00:12:56.943 "num_base_bdevs_discovered": 2, 00:12:56.943 "num_base_bdevs_operational": 4, 00:12:56.943 "base_bdevs_list": [ 00:12:56.943 { 00:12:56.943 "name": "BaseBdev1", 00:12:56.943 "uuid": "454826dc-29f1-4d58-8618-34f8de8177fe", 00:12:56.943 "is_configured": true, 00:12:56.943 "data_offset": 0, 00:12:56.943 "data_size": 65536 00:12:56.943 }, 00:12:56.943 { 00:12:56.943 "name": null, 00:12:56.943 "uuid": "37cc6d6b-f8b6-4af8-80a7-bc30f8cdb9bf", 00:12:56.943 "is_configured": false, 00:12:56.943 "data_offset": 0, 00:12:56.943 "data_size": 65536 00:12:56.943 }, 00:12:56.943 { 00:12:56.943 "name": null, 00:12:56.943 "uuid": "76d3c749-41b5-4db2-8c7f-eae3d020c884", 00:12:56.943 "is_configured": false, 00:12:56.943 "data_offset": 0, 00:12:56.943 "data_size": 65536 00:12:56.943 }, 00:12:56.943 { 00:12:56.943 "name": "BaseBdev4", 00:12:56.943 "uuid": "3f8b22d3-c733-4c38-afa5-30b3fcd488c9", 00:12:56.943 "is_configured": true, 00:12:56.943 "data_offset": 0, 00:12:56.943 "data_size": 65536 00:12:56.943 } 00:12:56.943 ] 00:12:56.943 }' 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.943 16:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.614 16:28:39 bdev_raid.raid_state_function_test -- 
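After `bdev_raid_remove_base_bdev BaseBdev3`, the trace shows the removed slot keeps its uuid but its name drops to `null` and `is_configured` flips to `false`, which is why `num_base_bdevs_discovered` falls back to 2. A sketch that recomputes the discovered count from the `base_bdevs_list` shown above (`grep` stands in for the script's `jq` filters):

```shell
# base_bdevs_list in the shape printed by the trace after removals:
# two configured members, two unconfigured slots that kept their uuids.
base_bdevs_list='[
  {"name": "BaseBdev1", "uuid": "454826dc-29f1-4d58-8618-34f8de8177fe", "is_configured": true},
  {"name": null, "uuid": "37cc6d6b-f8b6-4af8-80a7-bc30f8cdb9bf", "is_configured": false},
  {"name": null, "uuid": "76d3c749-41b5-4db2-8c7f-eae3d020c884", "is_configured": false},
  {"name": "BaseBdev4", "uuid": "3f8b22d3-c733-4c38-afa5-30b3fcd488c9", "is_configured": true}
]'

# num_base_bdevs_discovered is just the number of configured slots.
discovered=$(printf '%s\n' "$base_bdevs_list" | grep -c '"is_configured": true')
echo "discovered=$discovered"
```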
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.614 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.614 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.614 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.614 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.614 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:57.614 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:57.614 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.614 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.615 [2024-12-06 16:28:39.130937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.615 "name": "Existed_Raid", 00:12:57.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.615 "strip_size_kb": 64, 00:12:57.615 "state": "configuring", 00:12:57.615 "raid_level": "concat", 00:12:57.615 "superblock": false, 00:12:57.615 "num_base_bdevs": 4, 00:12:57.615 "num_base_bdevs_discovered": 3, 00:12:57.615 "num_base_bdevs_operational": 4, 00:12:57.615 "base_bdevs_list": [ 00:12:57.615 { 00:12:57.615 "name": "BaseBdev1", 00:12:57.615 "uuid": "454826dc-29f1-4d58-8618-34f8de8177fe", 00:12:57.615 "is_configured": true, 00:12:57.615 "data_offset": 0, 00:12:57.615 "data_size": 65536 00:12:57.615 }, 00:12:57.615 { 00:12:57.615 "name": null, 00:12:57.615 "uuid": "37cc6d6b-f8b6-4af8-80a7-bc30f8cdb9bf", 00:12:57.615 "is_configured": false, 00:12:57.615 "data_offset": 0, 00:12:57.615 "data_size": 65536 00:12:57.615 }, 00:12:57.615 { 00:12:57.615 "name": "BaseBdev3", 00:12:57.615 "uuid": 
"76d3c749-41b5-4db2-8c7f-eae3d020c884", 00:12:57.615 "is_configured": true, 00:12:57.615 "data_offset": 0, 00:12:57.615 "data_size": 65536 00:12:57.615 }, 00:12:57.615 { 00:12:57.615 "name": "BaseBdev4", 00:12:57.615 "uuid": "3f8b22d3-c733-4c38-afa5-30b3fcd488c9", 00:12:57.615 "is_configured": true, 00:12:57.615 "data_offset": 0, 00:12:57.615 "data_size": 65536 00:12:57.615 } 00:12:57.615 ] 00:12:57.615 }' 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.615 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.897 [2024-12-06 16:28:39.590154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.897 "name": "Existed_Raid", 00:12:57.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.897 "strip_size_kb": 64, 00:12:57.897 "state": "configuring", 00:12:57.897 "raid_level": "concat", 00:12:57.897 "superblock": false, 00:12:57.897 "num_base_bdevs": 4, 00:12:57.897 
"num_base_bdevs_discovered": 2, 00:12:57.897 "num_base_bdevs_operational": 4, 00:12:57.897 "base_bdevs_list": [ 00:12:57.897 { 00:12:57.897 "name": null, 00:12:57.897 "uuid": "454826dc-29f1-4d58-8618-34f8de8177fe", 00:12:57.897 "is_configured": false, 00:12:57.897 "data_offset": 0, 00:12:57.897 "data_size": 65536 00:12:57.897 }, 00:12:57.897 { 00:12:57.897 "name": null, 00:12:57.897 "uuid": "37cc6d6b-f8b6-4af8-80a7-bc30f8cdb9bf", 00:12:57.897 "is_configured": false, 00:12:57.897 "data_offset": 0, 00:12:57.897 "data_size": 65536 00:12:57.897 }, 00:12:57.897 { 00:12:57.897 "name": "BaseBdev3", 00:12:57.897 "uuid": "76d3c749-41b5-4db2-8c7f-eae3d020c884", 00:12:57.897 "is_configured": true, 00:12:57.897 "data_offset": 0, 00:12:57.897 "data_size": 65536 00:12:57.897 }, 00:12:57.897 { 00:12:57.897 "name": "BaseBdev4", 00:12:57.897 "uuid": "3f8b22d3-c733-4c38-afa5-30b3fcd488c9", 00:12:57.897 "is_configured": true, 00:12:57.897 "data_offset": 0, 00:12:57.897 "data_size": 65536 00:12:57.897 } 00:12:57.897 ] 00:12:57.897 }' 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.897 16:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.464 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.464 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:58.464 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.464 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.464 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.464 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:58.464 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
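Throughout these remove/re-add steps the expected state stays `configuring` because `num_base_bdevs_discovered` is below `num_base_bdevs`; a concat array needs every base bdev present before it can come up. A hypothetical condensation of that rule, fed the counts seen in the surrounding checks:

```shell
# Sketch of the state rule this test exercises: a concat raid stays
# "configuring" until every slot is discovered. Not the C implementation.
raid_state() {
    discovered=$1
    num_base_bdevs=$2
    if [ "$discovered" -eq "$num_base_bdevs" ]; then
        echo online
    else
        echo configuring
    fi
}

echo "2/4 -> $(raid_state 2 4)"   # after the removals above
echo "3/4 -> $(raid_state 3 4)"   # after one slot is re-added
echo "4/4 -> $(raid_state 4 4)"   # all slots back: array can go online
```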
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:58.464 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.464 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.464 [2024-12-06 16:28:40.104155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.464 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.464 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.465 "name": "Existed_Raid", 00:12:58.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.465 "strip_size_kb": 64, 00:12:58.465 "state": "configuring", 00:12:58.465 "raid_level": "concat", 00:12:58.465 "superblock": false, 00:12:58.465 "num_base_bdevs": 4, 00:12:58.465 "num_base_bdevs_discovered": 3, 00:12:58.465 "num_base_bdevs_operational": 4, 00:12:58.465 "base_bdevs_list": [ 00:12:58.465 { 00:12:58.465 "name": null, 00:12:58.465 "uuid": "454826dc-29f1-4d58-8618-34f8de8177fe", 00:12:58.465 "is_configured": false, 00:12:58.465 "data_offset": 0, 00:12:58.465 "data_size": 65536 00:12:58.465 }, 00:12:58.465 { 00:12:58.465 "name": "BaseBdev2", 00:12:58.465 "uuid": "37cc6d6b-f8b6-4af8-80a7-bc30f8cdb9bf", 00:12:58.465 "is_configured": true, 00:12:58.465 "data_offset": 0, 00:12:58.465 "data_size": 65536 00:12:58.465 }, 00:12:58.465 { 00:12:58.465 "name": "BaseBdev3", 00:12:58.465 "uuid": "76d3c749-41b5-4db2-8c7f-eae3d020c884", 00:12:58.465 "is_configured": true, 00:12:58.465 "data_offset": 0, 00:12:58.465 "data_size": 65536 00:12:58.465 }, 00:12:58.465 { 00:12:58.465 "name": "BaseBdev4", 00:12:58.465 "uuid": "3f8b22d3-c733-4c38-afa5-30b3fcd488c9", 00:12:58.465 "is_configured": true, 00:12:58.465 "data_offset": 0, 00:12:58.465 "data_size": 65536 00:12:58.465 } 00:12:58.465 ] 00:12:58.465 }' 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.465 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.724 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:12:58.724 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.724 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.724 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.724 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 454826dc-29f1-4d58-8618-34f8de8177fe 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.984 [2024-12-06 16:28:40.634572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:58.984 [2024-12-06 16:28:40.634620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:58.984 [2024-12-06 16:28:40.634628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:58.984 [2024-12-06 16:28:40.634911] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:58.984 [2024-12-06 16:28:40.635025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:58.984 [2024-12-06 16:28:40.635036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:12:58.984 [2024-12-06 16:28:40.635230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.984 NewBaseBdev 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.984 16:28:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.984 [ 00:12:58.984 { 00:12:58.984 "name": "NewBaseBdev", 00:12:58.984 "aliases": [ 00:12:58.984 "454826dc-29f1-4d58-8618-34f8de8177fe" 00:12:58.984 ], 00:12:58.984 "product_name": "Malloc disk", 00:12:58.984 "block_size": 512, 00:12:58.984 "num_blocks": 65536, 00:12:58.984 "uuid": "454826dc-29f1-4d58-8618-34f8de8177fe", 00:12:58.984 "assigned_rate_limits": { 00:12:58.984 "rw_ios_per_sec": 0, 00:12:58.984 "rw_mbytes_per_sec": 0, 00:12:58.984 "r_mbytes_per_sec": 0, 00:12:58.984 "w_mbytes_per_sec": 0 00:12:58.984 }, 00:12:58.984 "claimed": true, 00:12:58.984 "claim_type": "exclusive_write", 00:12:58.984 "zoned": false, 00:12:58.984 "supported_io_types": { 00:12:58.984 "read": true, 00:12:58.984 "write": true, 00:12:58.984 "unmap": true, 00:12:58.984 "flush": true, 00:12:58.984 "reset": true, 00:12:58.984 "nvme_admin": false, 00:12:58.984 "nvme_io": false, 00:12:58.984 "nvme_io_md": false, 00:12:58.984 "write_zeroes": true, 00:12:58.984 "zcopy": true, 00:12:58.984 "get_zone_info": false, 00:12:58.984 "zone_management": false, 00:12:58.984 "zone_append": false, 00:12:58.984 "compare": false, 00:12:58.984 "compare_and_write": false, 00:12:58.984 "abort": true, 00:12:58.984 "seek_hole": false, 00:12:58.984 "seek_data": false, 00:12:58.984 "copy": true, 00:12:58.984 "nvme_iov_md": false 00:12:58.984 }, 00:12:58.984 "memory_domains": [ 00:12:58.984 { 00:12:58.984 "dma_device_id": "system", 00:12:58.984 "dma_device_type": 1 00:12:58.984 }, 00:12:58.984 { 00:12:58.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.984 "dma_device_type": 2 00:12:58.984 } 00:12:58.984 ], 00:12:58.984 "driver_specific": {} 00:12:58.984 } 00:12:58.984 ] 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:58.984 16:28:40 
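The `NewBaseBdev` created above deliberately reuses the uuid of the deleted BaseBdev1 (`bdev_malloc_create ... -u 454826dc-...`): the raid module matches an incoming bdev to an empty slot by uuid, which is what lets `Existed_Raid` claim its fourth member and transition to `online` in the check that follows. A sketch of that slot lookup; the slot data is copied from the trace, but the matching logic is a plausible condensation, not the SPDK C implementation:

```shell
# Unconfigured slot remembered by Existed_Raid, as an "index:uuid" pair
# (uuid taken from the base_bdevs_list in the trace above).
empty_slots="0:454826dc-29f1-4d58-8618-34f8de8177fe"

# Find which slot a newly created bdev should fill, keyed by uuid.
find_slot() {
    new_uuid=$1
    for slot in $empty_slots; do
        case $slot in
            *:"$new_uuid") echo "${slot%%:*}"; return 0 ;;
        esac
    done
    return 1
}

slot=$(find_slot 454826dc-29f1-4d58-8618-34f8de8177fe)
echo "NewBaseBdev fills slot $slot"
```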
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.984 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.984 "name": "Existed_Raid", 00:12:58.984 "uuid": "d1f86536-53f2-4c1b-9497-645ffc982e73", 00:12:58.984 "strip_size_kb": 64, 00:12:58.984 "state": "online", 00:12:58.984 "raid_level": 
"concat", 00:12:58.984 "superblock": false, 00:12:58.984 "num_base_bdevs": 4, 00:12:58.984 "num_base_bdevs_discovered": 4, 00:12:58.984 "num_base_bdevs_operational": 4, 00:12:58.984 "base_bdevs_list": [ 00:12:58.984 { 00:12:58.984 "name": "NewBaseBdev", 00:12:58.984 "uuid": "454826dc-29f1-4d58-8618-34f8de8177fe", 00:12:58.984 "is_configured": true, 00:12:58.984 "data_offset": 0, 00:12:58.984 "data_size": 65536 00:12:58.985 }, 00:12:58.985 { 00:12:58.985 "name": "BaseBdev2", 00:12:58.985 "uuid": "37cc6d6b-f8b6-4af8-80a7-bc30f8cdb9bf", 00:12:58.985 "is_configured": true, 00:12:58.985 "data_offset": 0, 00:12:58.985 "data_size": 65536 00:12:58.985 }, 00:12:58.985 { 00:12:58.985 "name": "BaseBdev3", 00:12:58.985 "uuid": "76d3c749-41b5-4db2-8c7f-eae3d020c884", 00:12:58.985 "is_configured": true, 00:12:58.985 "data_offset": 0, 00:12:58.985 "data_size": 65536 00:12:58.985 }, 00:12:58.985 { 00:12:58.985 "name": "BaseBdev4", 00:12:58.985 "uuid": "3f8b22d3-c733-4c38-afa5-30b3fcd488c9", 00:12:58.985 "is_configured": true, 00:12:58.985 "data_offset": 0, 00:12:58.985 "data_size": 65536 00:12:58.985 } 00:12:58.985 ] 00:12:58.985 }' 00:12:58.985 16:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.985 16:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.553 [2024-12-06 16:28:41.170114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:59.553 "name": "Existed_Raid", 00:12:59.553 "aliases": [ 00:12:59.553 "d1f86536-53f2-4c1b-9497-645ffc982e73" 00:12:59.553 ], 00:12:59.553 "product_name": "Raid Volume", 00:12:59.553 "block_size": 512, 00:12:59.553 "num_blocks": 262144, 00:12:59.553 "uuid": "d1f86536-53f2-4c1b-9497-645ffc982e73", 00:12:59.553 "assigned_rate_limits": { 00:12:59.553 "rw_ios_per_sec": 0, 00:12:59.553 "rw_mbytes_per_sec": 0, 00:12:59.553 "r_mbytes_per_sec": 0, 00:12:59.553 "w_mbytes_per_sec": 0 00:12:59.553 }, 00:12:59.553 "claimed": false, 00:12:59.553 "zoned": false, 00:12:59.553 "supported_io_types": { 00:12:59.553 "read": true, 00:12:59.553 "write": true, 00:12:59.553 "unmap": true, 00:12:59.553 "flush": true, 00:12:59.553 "reset": true, 00:12:59.553 "nvme_admin": false, 00:12:59.553 "nvme_io": false, 00:12:59.553 "nvme_io_md": false, 00:12:59.553 "write_zeroes": true, 00:12:59.553 "zcopy": false, 00:12:59.553 "get_zone_info": false, 00:12:59.553 "zone_management": false, 00:12:59.553 "zone_append": false, 00:12:59.553 "compare": false, 00:12:59.553 "compare_and_write": false, 00:12:59.553 "abort": false, 00:12:59.553 "seek_hole": false, 00:12:59.553 "seek_data": false, 00:12:59.553 "copy": false, 
00:12:59.553 "nvme_iov_md": false 00:12:59.553 }, 00:12:59.553 "memory_domains": [ 00:12:59.553 { 00:12:59.553 "dma_device_id": "system", 00:12:59.553 "dma_device_type": 1 00:12:59.553 }, 00:12:59.553 { 00:12:59.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.553 "dma_device_type": 2 00:12:59.553 }, 00:12:59.553 { 00:12:59.553 "dma_device_id": "system", 00:12:59.553 "dma_device_type": 1 00:12:59.553 }, 00:12:59.553 { 00:12:59.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.553 "dma_device_type": 2 00:12:59.553 }, 00:12:59.553 { 00:12:59.553 "dma_device_id": "system", 00:12:59.553 "dma_device_type": 1 00:12:59.553 }, 00:12:59.553 { 00:12:59.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.553 "dma_device_type": 2 00:12:59.553 }, 00:12:59.553 { 00:12:59.553 "dma_device_id": "system", 00:12:59.553 "dma_device_type": 1 00:12:59.553 }, 00:12:59.553 { 00:12:59.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.553 "dma_device_type": 2 00:12:59.553 } 00:12:59.553 ], 00:12:59.553 "driver_specific": { 00:12:59.553 "raid": { 00:12:59.553 "uuid": "d1f86536-53f2-4c1b-9497-645ffc982e73", 00:12:59.553 "strip_size_kb": 64, 00:12:59.553 "state": "online", 00:12:59.553 "raid_level": "concat", 00:12:59.553 "superblock": false, 00:12:59.553 "num_base_bdevs": 4, 00:12:59.553 "num_base_bdevs_discovered": 4, 00:12:59.553 "num_base_bdevs_operational": 4, 00:12:59.553 "base_bdevs_list": [ 00:12:59.553 { 00:12:59.553 "name": "NewBaseBdev", 00:12:59.553 "uuid": "454826dc-29f1-4d58-8618-34f8de8177fe", 00:12:59.553 "is_configured": true, 00:12:59.553 "data_offset": 0, 00:12:59.553 "data_size": 65536 00:12:59.553 }, 00:12:59.553 { 00:12:59.553 "name": "BaseBdev2", 00:12:59.553 "uuid": "37cc6d6b-f8b6-4af8-80a7-bc30f8cdb9bf", 00:12:59.553 "is_configured": true, 00:12:59.553 "data_offset": 0, 00:12:59.553 "data_size": 65536 00:12:59.553 }, 00:12:59.553 { 00:12:59.553 "name": "BaseBdev3", 00:12:59.553 "uuid": "76d3c749-41b5-4db2-8c7f-eae3d020c884", 00:12:59.553 
"is_configured": true, 00:12:59.553 "data_offset": 0, 00:12:59.553 "data_size": 65536 00:12:59.553 }, 00:12:59.553 { 00:12:59.553 "name": "BaseBdev4", 00:12:59.553 "uuid": "3f8b22d3-c733-4c38-afa5-30b3fcd488c9", 00:12:59.553 "is_configured": true, 00:12:59.553 "data_offset": 0, 00:12:59.553 "data_size": 65536 00:12:59.553 } 00:12:59.553 ] 00:12:59.553 } 00:12:59.553 } 00:12:59.553 }' 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:59.553 BaseBdev2 00:12:59.553 BaseBdev3 00:12:59.553 BaseBdev4' 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.553 16:28:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.553 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.554 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.554 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.813 16:28:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.813 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.813 [2024-12-06 16:28:41.529137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.814 [2024-12-06 16:28:41.529172] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.814 [2024-12-06 16:28:41.529273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.814 [2024-12-06 16:28:41.529348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.814 [2024-12-06 16:28:41.529359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:12:59.814 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.814 16:28:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 82566 00:12:59.814 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82566 ']' 00:12:59.814 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82566 00:12:59.814 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:59.814 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.814 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82566 00:12:59.814 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.814 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.814 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82566' 00:12:59.814 killing process with pid 82566 00:12:59.814 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 82566 00:12:59.814 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 82566 00:12:59.814 [2024-12-06 16:28:41.579517] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.814 [2024-12-06 16:28:41.622080] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.074 16:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:00.074 00:13:00.074 real 0m9.692s 00:13:00.074 user 0m16.623s 00:13:00.074 sys 0m2.071s 00:13:00.074 ************************************ 00:13:00.074 END TEST raid_state_function_test 00:13:00.074 ************************************ 00:13:00.074 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.074 16:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:00.074 16:28:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:13:00.074 16:28:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:00.074 16:28:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.074 16:28:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.334 ************************************ 00:13:00.334 START TEST raid_state_function_test_sb 00:13:00.334 ************************************ 00:13:00.334 16:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:13:00.334 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:00.334 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:00.334 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.335 
16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=83221 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83221' 00:13:00.335 Process raid pid: 83221 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83221 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83221 ']' 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.335 16:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.335 [2024-12-06 16:28:42.010578] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:13:00.335 [2024-12-06 16:28:42.010811] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.594 [2024-12-06 16:28:42.185563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.594 [2024-12-06 16:28:42.214137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.594 [2024-12-06 16:28:42.257071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.594 [2024-12-06 16:28:42.257187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.165 [2024-12-06 16:28:42.887772] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:01.165 [2024-12-06 16:28:42.887894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:01.165 [2024-12-06 16:28:42.887944] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.165 [2024-12-06 16:28:42.887973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.165 [2024-12-06 16:28:42.887995] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:01.165 [2024-12-06 16:28:42.888022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.165 [2024-12-06 16:28:42.888042] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:01.165 [2024-12-06 16:28:42.888076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.165 16:28:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.165 "name": "Existed_Raid", 00:13:01.165 "uuid": "c1f05fad-d5a7-44e4-90ae-6dcba7c6111c", 00:13:01.165 "strip_size_kb": 64, 00:13:01.165 "state": "configuring", 00:13:01.165 "raid_level": "concat", 00:13:01.165 "superblock": true, 00:13:01.165 "num_base_bdevs": 4, 00:13:01.165 "num_base_bdevs_discovered": 0, 00:13:01.165 "num_base_bdevs_operational": 4, 00:13:01.165 "base_bdevs_list": [ 00:13:01.165 { 00:13:01.165 "name": "BaseBdev1", 00:13:01.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.165 "is_configured": false, 00:13:01.165 "data_offset": 0, 00:13:01.165 "data_size": 0 00:13:01.165 }, 00:13:01.165 { 00:13:01.165 "name": "BaseBdev2", 00:13:01.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.165 "is_configured": false, 00:13:01.165 "data_offset": 0, 00:13:01.165 "data_size": 0 00:13:01.165 }, 00:13:01.165 { 00:13:01.165 "name": "BaseBdev3", 00:13:01.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.165 "is_configured": false, 00:13:01.165 "data_offset": 0, 00:13:01.165 "data_size": 0 00:13:01.165 }, 00:13:01.165 { 00:13:01.165 "name": "BaseBdev4", 00:13:01.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.165 "is_configured": false, 00:13:01.165 "data_offset": 0, 00:13:01.165 "data_size": 0 00:13:01.165 } 00:13:01.165 ] 00:13:01.165 }' 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.165 16:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.734 16:28:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.734 [2024-12-06 16:28:43.350844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.734 [2024-12-06 16:28:43.350887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.734 [2024-12-06 16:28:43.362853] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:01.734 [2024-12-06 16:28:43.362897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:01.734 [2024-12-06 16:28:43.362906] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.734 [2024-12-06 16:28:43.362915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.734 [2024-12-06 16:28:43.362921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:01.734 [2024-12-06 16:28:43.362930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.734 [2024-12-06 16:28:43.362936] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:13:01.734 [2024-12-06 16:28:43.362945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.734 [2024-12-06 16:28:43.383838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.734 BaseBdev1 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.734 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.734 [ 00:13:01.734 { 00:13:01.734 "name": "BaseBdev1", 00:13:01.734 "aliases": [ 00:13:01.734 "6c2669cf-3856-4bb0-bbfc-fbf89c0c3292" 00:13:01.734 ], 00:13:01.734 "product_name": "Malloc disk", 00:13:01.734 "block_size": 512, 00:13:01.734 "num_blocks": 65536, 00:13:01.734 "uuid": "6c2669cf-3856-4bb0-bbfc-fbf89c0c3292", 00:13:01.734 "assigned_rate_limits": { 00:13:01.734 "rw_ios_per_sec": 0, 00:13:01.734 "rw_mbytes_per_sec": 0, 00:13:01.734 "r_mbytes_per_sec": 0, 00:13:01.734 "w_mbytes_per_sec": 0 00:13:01.734 }, 00:13:01.735 "claimed": true, 00:13:01.735 "claim_type": "exclusive_write", 00:13:01.735 "zoned": false, 00:13:01.735 "supported_io_types": { 00:13:01.735 "read": true, 00:13:01.735 "write": true, 00:13:01.735 "unmap": true, 00:13:01.735 "flush": true, 00:13:01.735 "reset": true, 00:13:01.735 "nvme_admin": false, 00:13:01.735 "nvme_io": false, 00:13:01.735 "nvme_io_md": false, 00:13:01.735 "write_zeroes": true, 00:13:01.735 "zcopy": true, 00:13:01.735 "get_zone_info": false, 00:13:01.735 "zone_management": false, 00:13:01.735 "zone_append": false, 00:13:01.735 "compare": false, 00:13:01.735 "compare_and_write": false, 00:13:01.735 "abort": true, 00:13:01.735 "seek_hole": false, 00:13:01.735 "seek_data": false, 00:13:01.735 "copy": true, 00:13:01.735 "nvme_iov_md": false 00:13:01.735 }, 00:13:01.735 "memory_domains": [ 00:13:01.735 { 00:13:01.735 "dma_device_id": "system", 00:13:01.735 "dma_device_type": 1 00:13:01.735 }, 00:13:01.735 { 00:13:01.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.735 "dma_device_type": 2 00:13:01.735 } 
00:13:01.735 ], 00:13:01.735 "driver_specific": {} 00:13:01.735 } 00:13:01.735 ] 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.735 16:28:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.735 "name": "Existed_Raid", 00:13:01.735 "uuid": "8b4f8f84-2c91-4695-aaf2-4b932efd4885", 00:13:01.735 "strip_size_kb": 64, 00:13:01.735 "state": "configuring", 00:13:01.735 "raid_level": "concat", 00:13:01.735 "superblock": true, 00:13:01.735 "num_base_bdevs": 4, 00:13:01.735 "num_base_bdevs_discovered": 1, 00:13:01.735 "num_base_bdevs_operational": 4, 00:13:01.735 "base_bdevs_list": [ 00:13:01.735 { 00:13:01.735 "name": "BaseBdev1", 00:13:01.735 "uuid": "6c2669cf-3856-4bb0-bbfc-fbf89c0c3292", 00:13:01.735 "is_configured": true, 00:13:01.735 "data_offset": 2048, 00:13:01.735 "data_size": 63488 00:13:01.735 }, 00:13:01.735 { 00:13:01.735 "name": "BaseBdev2", 00:13:01.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.735 "is_configured": false, 00:13:01.735 "data_offset": 0, 00:13:01.735 "data_size": 0 00:13:01.735 }, 00:13:01.735 { 00:13:01.735 "name": "BaseBdev3", 00:13:01.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.735 "is_configured": false, 00:13:01.735 "data_offset": 0, 00:13:01.735 "data_size": 0 00:13:01.735 }, 00:13:01.735 { 00:13:01.735 "name": "BaseBdev4", 00:13:01.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.735 "is_configured": false, 00:13:01.735 "data_offset": 0, 00:13:01.735 "data_size": 0 00:13:01.735 } 00:13:01.735 ] 00:13:01.735 }' 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.735 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.304 16:28:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.304 [2024-12-06 16:28:43.867163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:02.304 [2024-12-06 16:28:43.867297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.304 [2024-12-06 16:28:43.879172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.304 [2024-12-06 16:28:43.881322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:02.304 [2024-12-06 16:28:43.881406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:02.304 [2024-12-06 16:28:43.881439] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:02.304 [2024-12-06 16:28:43.881466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:02.304 [2024-12-06 16:28:43.881487] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:02.304 [2024-12-06 16:28:43.881511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.304 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.305 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.305 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:02.305 "name": "Existed_Raid", 00:13:02.305 "uuid": "4e09e8da-baed-46ec-b979-8bf8b1396468", 00:13:02.305 "strip_size_kb": 64, 00:13:02.305 "state": "configuring", 00:13:02.305 "raid_level": "concat", 00:13:02.305 "superblock": true, 00:13:02.305 "num_base_bdevs": 4, 00:13:02.305 "num_base_bdevs_discovered": 1, 00:13:02.305 "num_base_bdevs_operational": 4, 00:13:02.305 "base_bdevs_list": [ 00:13:02.305 { 00:13:02.305 "name": "BaseBdev1", 00:13:02.305 "uuid": "6c2669cf-3856-4bb0-bbfc-fbf89c0c3292", 00:13:02.305 "is_configured": true, 00:13:02.305 "data_offset": 2048, 00:13:02.305 "data_size": 63488 00:13:02.305 }, 00:13:02.305 { 00:13:02.305 "name": "BaseBdev2", 00:13:02.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.305 "is_configured": false, 00:13:02.305 "data_offset": 0, 00:13:02.305 "data_size": 0 00:13:02.305 }, 00:13:02.305 { 00:13:02.305 "name": "BaseBdev3", 00:13:02.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.305 "is_configured": false, 00:13:02.305 "data_offset": 0, 00:13:02.305 "data_size": 0 00:13:02.305 }, 00:13:02.305 { 00:13:02.305 "name": "BaseBdev4", 00:13:02.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.305 "is_configured": false, 00:13:02.305 "data_offset": 0, 00:13:02.305 "data_size": 0 00:13:02.305 } 00:13:02.305 ] 00:13:02.305 }' 00:13:02.305 16:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.305 16:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.564 [2024-12-06 16:28:44.357495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:02.564 BaseBdev2 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.564 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.564 [ 00:13:02.564 { 00:13:02.564 "name": "BaseBdev2", 00:13:02.564 "aliases": [ 00:13:02.564 "97f88182-bb9b-4ba0-9ce8-43e32c266ecc" 00:13:02.564 ], 00:13:02.564 "product_name": "Malloc disk", 00:13:02.564 "block_size": 512, 00:13:02.564 "num_blocks": 65536, 00:13:02.564 "uuid": "97f88182-bb9b-4ba0-9ce8-43e32c266ecc", 
00:13:02.564 "assigned_rate_limits": { 00:13:02.564 "rw_ios_per_sec": 0, 00:13:02.564 "rw_mbytes_per_sec": 0, 00:13:02.564 "r_mbytes_per_sec": 0, 00:13:02.564 "w_mbytes_per_sec": 0 00:13:02.564 }, 00:13:02.564 "claimed": true, 00:13:02.564 "claim_type": "exclusive_write", 00:13:02.564 "zoned": false, 00:13:02.564 "supported_io_types": { 00:13:02.564 "read": true, 00:13:02.564 "write": true, 00:13:02.564 "unmap": true, 00:13:02.564 "flush": true, 00:13:02.564 "reset": true, 00:13:02.564 "nvme_admin": false, 00:13:02.564 "nvme_io": false, 00:13:02.564 "nvme_io_md": false, 00:13:02.564 "write_zeroes": true, 00:13:02.564 "zcopy": true, 00:13:02.564 "get_zone_info": false, 00:13:02.564 "zone_management": false, 00:13:02.564 "zone_append": false, 00:13:02.564 "compare": false, 00:13:02.564 "compare_and_write": false, 00:13:02.565 "abort": true, 00:13:02.565 "seek_hole": false, 00:13:02.565 "seek_data": false, 00:13:02.565 "copy": true, 00:13:02.565 "nvme_iov_md": false 00:13:02.565 }, 00:13:02.565 "memory_domains": [ 00:13:02.565 { 00:13:02.565 "dma_device_id": "system", 00:13:02.565 "dma_device_type": 1 00:13:02.565 }, 00:13:02.565 { 00:13:02.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.565 "dma_device_type": 2 00:13:02.565 } 00:13:02.565 ], 00:13:02.565 "driver_specific": {} 00:13:02.565 } 00:13:02.565 ] 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.565 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.823 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.823 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.823 "name": "Existed_Raid", 00:13:02.823 "uuid": "4e09e8da-baed-46ec-b979-8bf8b1396468", 00:13:02.823 "strip_size_kb": 64, 00:13:02.823 "state": "configuring", 00:13:02.823 "raid_level": "concat", 00:13:02.823 "superblock": true, 00:13:02.823 "num_base_bdevs": 4, 00:13:02.823 "num_base_bdevs_discovered": 2, 00:13:02.823 
"num_base_bdevs_operational": 4, 00:13:02.823 "base_bdevs_list": [ 00:13:02.823 { 00:13:02.823 "name": "BaseBdev1", 00:13:02.823 "uuid": "6c2669cf-3856-4bb0-bbfc-fbf89c0c3292", 00:13:02.823 "is_configured": true, 00:13:02.823 "data_offset": 2048, 00:13:02.823 "data_size": 63488 00:13:02.823 }, 00:13:02.823 { 00:13:02.823 "name": "BaseBdev2", 00:13:02.823 "uuid": "97f88182-bb9b-4ba0-9ce8-43e32c266ecc", 00:13:02.823 "is_configured": true, 00:13:02.823 "data_offset": 2048, 00:13:02.823 "data_size": 63488 00:13:02.823 }, 00:13:02.823 { 00:13:02.823 "name": "BaseBdev3", 00:13:02.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.823 "is_configured": false, 00:13:02.823 "data_offset": 0, 00:13:02.823 "data_size": 0 00:13:02.823 }, 00:13:02.823 { 00:13:02.823 "name": "BaseBdev4", 00:13:02.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.823 "is_configured": false, 00:13:02.823 "data_offset": 0, 00:13:02.823 "data_size": 0 00:13:02.823 } 00:13:02.823 ] 00:13:02.823 }' 00:13:02.823 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.823 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.083 [2024-12-06 16:28:44.852313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.083 BaseBdev3 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.083 [ 00:13:03.083 { 00:13:03.083 "name": "BaseBdev3", 00:13:03.083 "aliases": [ 00:13:03.083 "a5414e94-e0d9-43c6-9865-8d502e799a96" 00:13:03.083 ], 00:13:03.083 "product_name": "Malloc disk", 00:13:03.083 "block_size": 512, 00:13:03.083 "num_blocks": 65536, 00:13:03.083 "uuid": "a5414e94-e0d9-43c6-9865-8d502e799a96", 00:13:03.083 "assigned_rate_limits": { 00:13:03.083 "rw_ios_per_sec": 0, 00:13:03.083 "rw_mbytes_per_sec": 0, 00:13:03.083 "r_mbytes_per_sec": 0, 00:13:03.083 "w_mbytes_per_sec": 0 00:13:03.083 }, 00:13:03.083 "claimed": true, 00:13:03.083 "claim_type": "exclusive_write", 00:13:03.083 "zoned": false, 00:13:03.083 "supported_io_types": { 
00:13:03.083 "read": true, 00:13:03.083 "write": true, 00:13:03.083 "unmap": true, 00:13:03.083 "flush": true, 00:13:03.083 "reset": true, 00:13:03.083 "nvme_admin": false, 00:13:03.083 "nvme_io": false, 00:13:03.083 "nvme_io_md": false, 00:13:03.083 "write_zeroes": true, 00:13:03.083 "zcopy": true, 00:13:03.083 "get_zone_info": false, 00:13:03.083 "zone_management": false, 00:13:03.083 "zone_append": false, 00:13:03.083 "compare": false, 00:13:03.083 "compare_and_write": false, 00:13:03.083 "abort": true, 00:13:03.083 "seek_hole": false, 00:13:03.083 "seek_data": false, 00:13:03.083 "copy": true, 00:13:03.083 "nvme_iov_md": false 00:13:03.083 }, 00:13:03.083 "memory_domains": [ 00:13:03.083 { 00:13:03.083 "dma_device_id": "system", 00:13:03.083 "dma_device_type": 1 00:13:03.083 }, 00:13:03.083 { 00:13:03.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.083 "dma_device_type": 2 00:13:03.083 } 00:13:03.083 ], 00:13:03.083 "driver_specific": {} 00:13:03.083 } 00:13:03.083 ] 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:03.083 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.084 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.342 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.342 "name": "Existed_Raid", 00:13:03.342 "uuid": "4e09e8da-baed-46ec-b979-8bf8b1396468", 00:13:03.342 "strip_size_kb": 64, 00:13:03.342 "state": "configuring", 00:13:03.342 "raid_level": "concat", 00:13:03.342 "superblock": true, 00:13:03.342 "num_base_bdevs": 4, 00:13:03.342 "num_base_bdevs_discovered": 3, 00:13:03.342 "num_base_bdevs_operational": 4, 00:13:03.342 "base_bdevs_list": [ 00:13:03.342 { 00:13:03.342 "name": "BaseBdev1", 00:13:03.342 "uuid": "6c2669cf-3856-4bb0-bbfc-fbf89c0c3292", 00:13:03.342 "is_configured": true, 00:13:03.342 "data_offset": 2048, 00:13:03.342 "data_size": 63488 00:13:03.342 }, 00:13:03.342 { 00:13:03.342 "name": "BaseBdev2", 00:13:03.342 
"uuid": "97f88182-bb9b-4ba0-9ce8-43e32c266ecc", 00:13:03.342 "is_configured": true, 00:13:03.342 "data_offset": 2048, 00:13:03.342 "data_size": 63488 00:13:03.342 }, 00:13:03.342 { 00:13:03.342 "name": "BaseBdev3", 00:13:03.342 "uuid": "a5414e94-e0d9-43c6-9865-8d502e799a96", 00:13:03.342 "is_configured": true, 00:13:03.342 "data_offset": 2048, 00:13:03.342 "data_size": 63488 00:13:03.342 }, 00:13:03.342 { 00:13:03.342 "name": "BaseBdev4", 00:13:03.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.342 "is_configured": false, 00:13:03.342 "data_offset": 0, 00:13:03.342 "data_size": 0 00:13:03.342 } 00:13:03.342 ] 00:13:03.342 }' 00:13:03.342 16:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.342 16:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.601 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:03.601 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.601 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.601 [2024-12-06 16:28:45.330775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.601 [2024-12-06 16:28:45.330992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:03.601 [2024-12-06 16:28:45.331008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:03.601 BaseBdev4 00:13:03.601 [2024-12-06 16:28:45.331291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:03.601 [2024-12-06 16:28:45.331428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:03.601 [2024-12-06 16:28:45.331441] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:13:03.601 [2024-12-06 16:28:45.331596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.601 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.601 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:03.601 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:03.601 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:03.601 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:03.601 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.602 [ 00:13:03.602 { 00:13:03.602 "name": "BaseBdev4", 00:13:03.602 "aliases": [ 00:13:03.602 "3b667634-d6a4-4d16-aba2-cc99823706d6" 00:13:03.602 ], 00:13:03.602 "product_name": "Malloc disk", 00:13:03.602 "block_size": 512, 00:13:03.602 
"num_blocks": 65536, 00:13:03.602 "uuid": "3b667634-d6a4-4d16-aba2-cc99823706d6", 00:13:03.602 "assigned_rate_limits": { 00:13:03.602 "rw_ios_per_sec": 0, 00:13:03.602 "rw_mbytes_per_sec": 0, 00:13:03.602 "r_mbytes_per_sec": 0, 00:13:03.602 "w_mbytes_per_sec": 0 00:13:03.602 }, 00:13:03.602 "claimed": true, 00:13:03.602 "claim_type": "exclusive_write", 00:13:03.602 "zoned": false, 00:13:03.602 "supported_io_types": { 00:13:03.602 "read": true, 00:13:03.602 "write": true, 00:13:03.602 "unmap": true, 00:13:03.602 "flush": true, 00:13:03.602 "reset": true, 00:13:03.602 "nvme_admin": false, 00:13:03.602 "nvme_io": false, 00:13:03.602 "nvme_io_md": false, 00:13:03.602 "write_zeroes": true, 00:13:03.602 "zcopy": true, 00:13:03.602 "get_zone_info": false, 00:13:03.602 "zone_management": false, 00:13:03.602 "zone_append": false, 00:13:03.602 "compare": false, 00:13:03.602 "compare_and_write": false, 00:13:03.602 "abort": true, 00:13:03.602 "seek_hole": false, 00:13:03.602 "seek_data": false, 00:13:03.602 "copy": true, 00:13:03.602 "nvme_iov_md": false 00:13:03.602 }, 00:13:03.602 "memory_domains": [ 00:13:03.602 { 00:13:03.602 "dma_device_id": "system", 00:13:03.602 "dma_device_type": 1 00:13:03.602 }, 00:13:03.602 { 00:13:03.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.602 "dma_device_type": 2 00:13:03.602 } 00:13:03.602 ], 00:13:03.602 "driver_specific": {} 00:13:03.602 } 00:13:03.602 ] 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.602 "name": "Existed_Raid", 00:13:03.602 "uuid": "4e09e8da-baed-46ec-b979-8bf8b1396468", 00:13:03.602 "strip_size_kb": 64, 00:13:03.602 "state": "online", 00:13:03.602 "raid_level": "concat", 00:13:03.602 "superblock": true, 00:13:03.602 "num_base_bdevs": 4, 
00:13:03.602 "num_base_bdevs_discovered": 4, 00:13:03.602 "num_base_bdevs_operational": 4, 00:13:03.602 "base_bdevs_list": [ 00:13:03.602 { 00:13:03.602 "name": "BaseBdev1", 00:13:03.602 "uuid": "6c2669cf-3856-4bb0-bbfc-fbf89c0c3292", 00:13:03.602 "is_configured": true, 00:13:03.602 "data_offset": 2048, 00:13:03.602 "data_size": 63488 00:13:03.602 }, 00:13:03.602 { 00:13:03.602 "name": "BaseBdev2", 00:13:03.602 "uuid": "97f88182-bb9b-4ba0-9ce8-43e32c266ecc", 00:13:03.602 "is_configured": true, 00:13:03.602 "data_offset": 2048, 00:13:03.602 "data_size": 63488 00:13:03.602 }, 00:13:03.602 { 00:13:03.602 "name": "BaseBdev3", 00:13:03.602 "uuid": "a5414e94-e0d9-43c6-9865-8d502e799a96", 00:13:03.602 "is_configured": true, 00:13:03.602 "data_offset": 2048, 00:13:03.602 "data_size": 63488 00:13:03.602 }, 00:13:03.602 { 00:13:03.602 "name": "BaseBdev4", 00:13:03.602 "uuid": "3b667634-d6a4-4d16-aba2-cc99823706d6", 00:13:03.602 "is_configured": true, 00:13:03.602 "data_offset": 2048, 00:13:03.602 "data_size": 63488 00:13:03.602 } 00:13:03.602 ] 00:13:03.602 }' 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.602 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.169 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:04.169 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:04.169 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:04.169 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:04.169 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:04.169 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:04.169 
16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:04.169 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.169 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.169 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:04.169 [2024-12-06 16:28:45.842493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.169 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.169 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:04.169 "name": "Existed_Raid", 00:13:04.169 "aliases": [ 00:13:04.169 "4e09e8da-baed-46ec-b979-8bf8b1396468" 00:13:04.169 ], 00:13:04.169 "product_name": "Raid Volume", 00:13:04.169 "block_size": 512, 00:13:04.169 "num_blocks": 253952, 00:13:04.169 "uuid": "4e09e8da-baed-46ec-b979-8bf8b1396468", 00:13:04.169 "assigned_rate_limits": { 00:13:04.169 "rw_ios_per_sec": 0, 00:13:04.169 "rw_mbytes_per_sec": 0, 00:13:04.169 "r_mbytes_per_sec": 0, 00:13:04.169 "w_mbytes_per_sec": 0 00:13:04.169 }, 00:13:04.169 "claimed": false, 00:13:04.169 "zoned": false, 00:13:04.169 "supported_io_types": { 00:13:04.169 "read": true, 00:13:04.169 "write": true, 00:13:04.169 "unmap": true, 00:13:04.169 "flush": true, 00:13:04.169 "reset": true, 00:13:04.169 "nvme_admin": false, 00:13:04.169 "nvme_io": false, 00:13:04.169 "nvme_io_md": false, 00:13:04.169 "write_zeroes": true, 00:13:04.169 "zcopy": false, 00:13:04.169 "get_zone_info": false, 00:13:04.169 "zone_management": false, 00:13:04.169 "zone_append": false, 00:13:04.169 "compare": false, 00:13:04.169 "compare_and_write": false, 00:13:04.169 "abort": false, 00:13:04.169 "seek_hole": false, 00:13:04.169 "seek_data": false, 00:13:04.169 "copy": false, 00:13:04.169 
"nvme_iov_md": false 00:13:04.169 }, 00:13:04.169 "memory_domains": [ 00:13:04.169 { 00:13:04.169 "dma_device_id": "system", 00:13:04.169 "dma_device_type": 1 00:13:04.169 }, 00:13:04.169 { 00:13:04.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.169 "dma_device_type": 2 00:13:04.169 }, 00:13:04.169 { 00:13:04.169 "dma_device_id": "system", 00:13:04.169 "dma_device_type": 1 00:13:04.169 }, 00:13:04.169 { 00:13:04.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.169 "dma_device_type": 2 00:13:04.169 }, 00:13:04.169 { 00:13:04.169 "dma_device_id": "system", 00:13:04.169 "dma_device_type": 1 00:13:04.169 }, 00:13:04.169 { 00:13:04.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.169 "dma_device_type": 2 00:13:04.169 }, 00:13:04.169 { 00:13:04.169 "dma_device_id": "system", 00:13:04.169 "dma_device_type": 1 00:13:04.169 }, 00:13:04.169 { 00:13:04.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.169 "dma_device_type": 2 00:13:04.169 } 00:13:04.169 ], 00:13:04.169 "driver_specific": { 00:13:04.169 "raid": { 00:13:04.169 "uuid": "4e09e8da-baed-46ec-b979-8bf8b1396468", 00:13:04.169 "strip_size_kb": 64, 00:13:04.169 "state": "online", 00:13:04.169 "raid_level": "concat", 00:13:04.169 "superblock": true, 00:13:04.169 "num_base_bdevs": 4, 00:13:04.169 "num_base_bdevs_discovered": 4, 00:13:04.169 "num_base_bdevs_operational": 4, 00:13:04.169 "base_bdevs_list": [ 00:13:04.169 { 00:13:04.169 "name": "BaseBdev1", 00:13:04.169 "uuid": "6c2669cf-3856-4bb0-bbfc-fbf89c0c3292", 00:13:04.169 "is_configured": true, 00:13:04.169 "data_offset": 2048, 00:13:04.169 "data_size": 63488 00:13:04.169 }, 00:13:04.169 { 00:13:04.169 "name": "BaseBdev2", 00:13:04.169 "uuid": "97f88182-bb9b-4ba0-9ce8-43e32c266ecc", 00:13:04.169 "is_configured": true, 00:13:04.169 "data_offset": 2048, 00:13:04.169 "data_size": 63488 00:13:04.169 }, 00:13:04.169 { 00:13:04.169 "name": "BaseBdev3", 00:13:04.170 "uuid": "a5414e94-e0d9-43c6-9865-8d502e799a96", 00:13:04.170 "is_configured": true, 
00:13:04.170 "data_offset": 2048, 00:13:04.170 "data_size": 63488 00:13:04.170 }, 00:13:04.170 { 00:13:04.170 "name": "BaseBdev4", 00:13:04.170 "uuid": "3b667634-d6a4-4d16-aba2-cc99823706d6", 00:13:04.170 "is_configured": true, 00:13:04.170 "data_offset": 2048, 00:13:04.170 "data_size": 63488 00:13:04.170 } 00:13:04.170 ] 00:13:04.170 } 00:13:04.170 } 00:13:04.170 }' 00:13:04.170 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:04.170 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:04.170 BaseBdev2 00:13:04.170 BaseBdev3 00:13:04.170 BaseBdev4' 00:13:04.170 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.170 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:04.170 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.170 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:04.170 16:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.170 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.170 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.170 16:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.429 16:28:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.429 [2024-12-06 16:28:46.149558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:04.429 [2024-12-06 16:28:46.149592] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.429 [2024-12-06 16:28:46.149668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:04.429 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.429 "name": "Existed_Raid", 00:13:04.429 "uuid": "4e09e8da-baed-46ec-b979-8bf8b1396468", 00:13:04.429 "strip_size_kb": 64, 00:13:04.429 "state": "offline", 00:13:04.429 "raid_level": "concat", 00:13:04.429 "superblock": true, 00:13:04.429 "num_base_bdevs": 4, 00:13:04.430 "num_base_bdevs_discovered": 3, 00:13:04.430 "num_base_bdevs_operational": 3, 00:13:04.430 "base_bdevs_list": [ 00:13:04.430 { 00:13:04.430 "name": null, 00:13:04.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.430 "is_configured": false, 00:13:04.430 "data_offset": 0, 00:13:04.430 "data_size": 63488 00:13:04.430 }, 00:13:04.430 { 00:13:04.430 "name": "BaseBdev2", 00:13:04.430 "uuid": "97f88182-bb9b-4ba0-9ce8-43e32c266ecc", 00:13:04.430 "is_configured": true, 00:13:04.430 "data_offset": 2048, 00:13:04.430 "data_size": 63488 00:13:04.430 }, 00:13:04.430 { 00:13:04.430 "name": "BaseBdev3", 00:13:04.430 "uuid": "a5414e94-e0d9-43c6-9865-8d502e799a96", 00:13:04.430 "is_configured": true, 00:13:04.430 "data_offset": 2048, 00:13:04.430 "data_size": 63488 00:13:04.430 }, 00:13:04.430 { 00:13:04.430 "name": "BaseBdev4", 00:13:04.430 "uuid": "3b667634-d6a4-4d16-aba2-cc99823706d6", 00:13:04.430 "is_configured": true, 00:13:04.430 "data_offset": 2048, 00:13:04.430 "data_size": 63488 00:13:04.430 } 00:13:04.430 ] 00:13:04.430 }' 00:13:04.430 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.430 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.998 
16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.998 [2024-12-06 16:28:46.628515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.998 [2024-12-06 16:28:46.704070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.998 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:04.999 16:28:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.999 [2024-12-06 16:28:46.771723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:04.999 [2024-12-06 16:28:46.771851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.999 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.258 BaseBdev2 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.258 [ 00:13:05.258 { 00:13:05.258 "name": "BaseBdev2", 00:13:05.258 "aliases": [ 00:13:05.258 
"1a455241-60f7-4220-b81c-132d8f0a34d5" 00:13:05.258 ], 00:13:05.258 "product_name": "Malloc disk", 00:13:05.258 "block_size": 512, 00:13:05.258 "num_blocks": 65536, 00:13:05.258 "uuid": "1a455241-60f7-4220-b81c-132d8f0a34d5", 00:13:05.258 "assigned_rate_limits": { 00:13:05.258 "rw_ios_per_sec": 0, 00:13:05.258 "rw_mbytes_per_sec": 0, 00:13:05.258 "r_mbytes_per_sec": 0, 00:13:05.258 "w_mbytes_per_sec": 0 00:13:05.258 }, 00:13:05.258 "claimed": false, 00:13:05.258 "zoned": false, 00:13:05.258 "supported_io_types": { 00:13:05.258 "read": true, 00:13:05.258 "write": true, 00:13:05.258 "unmap": true, 00:13:05.258 "flush": true, 00:13:05.258 "reset": true, 00:13:05.258 "nvme_admin": false, 00:13:05.258 "nvme_io": false, 00:13:05.258 "nvme_io_md": false, 00:13:05.258 "write_zeroes": true, 00:13:05.258 "zcopy": true, 00:13:05.258 "get_zone_info": false, 00:13:05.258 "zone_management": false, 00:13:05.258 "zone_append": false, 00:13:05.258 "compare": false, 00:13:05.258 "compare_and_write": false, 00:13:05.258 "abort": true, 00:13:05.258 "seek_hole": false, 00:13:05.258 "seek_data": false, 00:13:05.258 "copy": true, 00:13:05.258 "nvme_iov_md": false 00:13:05.258 }, 00:13:05.258 "memory_domains": [ 00:13:05.258 { 00:13:05.258 "dma_device_id": "system", 00:13:05.258 "dma_device_type": 1 00:13:05.258 }, 00:13:05.258 { 00:13:05.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.258 "dma_device_type": 2 00:13:05.258 } 00:13:05.258 ], 00:13:05.258 "driver_specific": {} 00:13:05.258 } 00:13:05.258 ] 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:05.258 16:28:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.258 BaseBdev3 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.258 [ 00:13:05.258 { 
00:13:05.258 "name": "BaseBdev3", 00:13:05.258 "aliases": [ 00:13:05.258 "0a23aff7-85e9-405d-90bf-ff7dfce00f9f" 00:13:05.258 ], 00:13:05.258 "product_name": "Malloc disk", 00:13:05.258 "block_size": 512, 00:13:05.258 "num_blocks": 65536, 00:13:05.258 "uuid": "0a23aff7-85e9-405d-90bf-ff7dfce00f9f", 00:13:05.258 "assigned_rate_limits": { 00:13:05.258 "rw_ios_per_sec": 0, 00:13:05.258 "rw_mbytes_per_sec": 0, 00:13:05.258 "r_mbytes_per_sec": 0, 00:13:05.258 "w_mbytes_per_sec": 0 00:13:05.258 }, 00:13:05.258 "claimed": false, 00:13:05.258 "zoned": false, 00:13:05.258 "supported_io_types": { 00:13:05.258 "read": true, 00:13:05.258 "write": true, 00:13:05.258 "unmap": true, 00:13:05.258 "flush": true, 00:13:05.258 "reset": true, 00:13:05.258 "nvme_admin": false, 00:13:05.258 "nvme_io": false, 00:13:05.258 "nvme_io_md": false, 00:13:05.258 "write_zeroes": true, 00:13:05.258 "zcopy": true, 00:13:05.258 "get_zone_info": false, 00:13:05.258 "zone_management": false, 00:13:05.258 "zone_append": false, 00:13:05.258 "compare": false, 00:13:05.258 "compare_and_write": false, 00:13:05.258 "abort": true, 00:13:05.258 "seek_hole": false, 00:13:05.258 "seek_data": false, 00:13:05.258 "copy": true, 00:13:05.258 "nvme_iov_md": false 00:13:05.258 }, 00:13:05.258 "memory_domains": [ 00:13:05.258 { 00:13:05.258 "dma_device_id": "system", 00:13:05.258 "dma_device_type": 1 00:13:05.258 }, 00:13:05.258 { 00:13:05.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.258 "dma_device_type": 2 00:13:05.258 } 00:13:05.258 ], 00:13:05.258 "driver_specific": {} 00:13:05.258 } 00:13:05.258 ] 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.258 BaseBdev4 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:05.258 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:05.259 [ 00:13:05.259 { 00:13:05.259 "name": "BaseBdev4", 00:13:05.259 "aliases": [ 00:13:05.259 "b6f74948-48cf-4fb7-a71f-606f3f8909f8" 00:13:05.259 ], 00:13:05.259 "product_name": "Malloc disk", 00:13:05.259 "block_size": 512, 00:13:05.259 "num_blocks": 65536, 00:13:05.259 "uuid": "b6f74948-48cf-4fb7-a71f-606f3f8909f8", 00:13:05.259 "assigned_rate_limits": { 00:13:05.259 "rw_ios_per_sec": 0, 00:13:05.259 "rw_mbytes_per_sec": 0, 00:13:05.259 "r_mbytes_per_sec": 0, 00:13:05.259 "w_mbytes_per_sec": 0 00:13:05.259 }, 00:13:05.259 "claimed": false, 00:13:05.259 "zoned": false, 00:13:05.259 "supported_io_types": { 00:13:05.259 "read": true, 00:13:05.259 "write": true, 00:13:05.259 "unmap": true, 00:13:05.259 "flush": true, 00:13:05.259 "reset": true, 00:13:05.259 "nvme_admin": false, 00:13:05.259 "nvme_io": false, 00:13:05.259 "nvme_io_md": false, 00:13:05.259 "write_zeroes": true, 00:13:05.259 "zcopy": true, 00:13:05.259 "get_zone_info": false, 00:13:05.259 "zone_management": false, 00:13:05.259 "zone_append": false, 00:13:05.259 "compare": false, 00:13:05.259 "compare_and_write": false, 00:13:05.259 "abort": true, 00:13:05.259 "seek_hole": false, 00:13:05.259 "seek_data": false, 00:13:05.259 "copy": true, 00:13:05.259 "nvme_iov_md": false 00:13:05.259 }, 00:13:05.259 "memory_domains": [ 00:13:05.259 { 00:13:05.259 "dma_device_id": "system", 00:13:05.259 "dma_device_type": 1 00:13:05.259 }, 00:13:05.259 { 00:13:05.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.259 "dma_device_type": 2 00:13:05.259 } 00:13:05.259 ], 00:13:05.259 "driver_specific": {} 00:13:05.259 } 00:13:05.259 ] 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:05.259 16:28:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.259 16:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.259 [2024-12-06 16:28:47.002851] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:05.259 [2024-12-06 16:28:47.002937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:05.259 [2024-12-06 16:28:47.003022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:05.259 [2024-12-06 16:28:47.005275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.259 [2024-12-06 16:28:47.005374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.259 "name": "Existed_Raid", 00:13:05.259 "uuid": "181a7ee6-b377-40f9-9307-a60147818711", 00:13:05.259 "strip_size_kb": 64, 00:13:05.259 "state": "configuring", 00:13:05.259 "raid_level": "concat", 00:13:05.259 "superblock": true, 00:13:05.259 "num_base_bdevs": 4, 00:13:05.259 "num_base_bdevs_discovered": 3, 00:13:05.259 "num_base_bdevs_operational": 4, 00:13:05.259 "base_bdevs_list": [ 00:13:05.259 { 00:13:05.259 "name": "BaseBdev1", 00:13:05.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.259 "is_configured": false, 00:13:05.259 "data_offset": 0, 00:13:05.259 "data_size": 0 00:13:05.259 }, 00:13:05.259 { 00:13:05.259 "name": "BaseBdev2", 00:13:05.259 "uuid": "1a455241-60f7-4220-b81c-132d8f0a34d5", 00:13:05.259 "is_configured": true, 00:13:05.259 "data_offset": 2048, 00:13:05.259 "data_size": 63488 
00:13:05.259 }, 00:13:05.259 { 00:13:05.259 "name": "BaseBdev3", 00:13:05.259 "uuid": "0a23aff7-85e9-405d-90bf-ff7dfce00f9f", 00:13:05.259 "is_configured": true, 00:13:05.259 "data_offset": 2048, 00:13:05.259 "data_size": 63488 00:13:05.259 }, 00:13:05.259 { 00:13:05.259 "name": "BaseBdev4", 00:13:05.259 "uuid": "b6f74948-48cf-4fb7-a71f-606f3f8909f8", 00:13:05.259 "is_configured": true, 00:13:05.259 "data_offset": 2048, 00:13:05.259 "data_size": 63488 00:13:05.259 } 00:13:05.259 ] 00:13:05.259 }' 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.259 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.828 [2024-12-06 16:28:47.422117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.828 "name": "Existed_Raid", 00:13:05.828 "uuid": "181a7ee6-b377-40f9-9307-a60147818711", 00:13:05.828 "strip_size_kb": 64, 00:13:05.828 "state": "configuring", 00:13:05.828 "raid_level": "concat", 00:13:05.828 "superblock": true, 00:13:05.828 "num_base_bdevs": 4, 00:13:05.828 "num_base_bdevs_discovered": 2, 00:13:05.828 "num_base_bdevs_operational": 4, 00:13:05.828 "base_bdevs_list": [ 00:13:05.828 { 00:13:05.828 "name": "BaseBdev1", 00:13:05.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.828 "is_configured": false, 00:13:05.828 "data_offset": 0, 00:13:05.828 "data_size": 0 00:13:05.828 }, 00:13:05.828 { 00:13:05.828 "name": null, 00:13:05.828 "uuid": "1a455241-60f7-4220-b81c-132d8f0a34d5", 00:13:05.828 "is_configured": false, 00:13:05.828 "data_offset": 0, 00:13:05.828 "data_size": 63488 
00:13:05.828 }, 00:13:05.828 { 00:13:05.828 "name": "BaseBdev3", 00:13:05.828 "uuid": "0a23aff7-85e9-405d-90bf-ff7dfce00f9f", 00:13:05.828 "is_configured": true, 00:13:05.828 "data_offset": 2048, 00:13:05.828 "data_size": 63488 00:13:05.828 }, 00:13:05.828 { 00:13:05.828 "name": "BaseBdev4", 00:13:05.828 "uuid": "b6f74948-48cf-4fb7-a71f-606f3f8909f8", 00:13:05.828 "is_configured": true, 00:13:05.828 "data_offset": 2048, 00:13:05.828 "data_size": 63488 00:13:05.828 } 00:13:05.828 ] 00:13:05.828 }' 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.828 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.087 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.087 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:06.087 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.087 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.347 [2024-12-06 16:28:47.972359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.347 BaseBdev1 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.347 16:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.347 [ 00:13:06.347 { 00:13:06.347 "name": "BaseBdev1", 00:13:06.347 "aliases": [ 00:13:06.347 "6ad2bae1-3779-4c2e-ab98-17207277e272" 00:13:06.347 ], 00:13:06.347 "product_name": "Malloc disk", 00:13:06.347 "block_size": 512, 00:13:06.347 "num_blocks": 65536, 00:13:06.347 "uuid": "6ad2bae1-3779-4c2e-ab98-17207277e272", 00:13:06.347 "assigned_rate_limits": { 00:13:06.347 "rw_ios_per_sec": 0, 00:13:06.347 "rw_mbytes_per_sec": 0, 
00:13:06.347 "r_mbytes_per_sec": 0, 00:13:06.347 "w_mbytes_per_sec": 0 00:13:06.347 }, 00:13:06.347 "claimed": true, 00:13:06.347 "claim_type": "exclusive_write", 00:13:06.347 "zoned": false, 00:13:06.347 "supported_io_types": { 00:13:06.347 "read": true, 00:13:06.347 "write": true, 00:13:06.347 "unmap": true, 00:13:06.347 "flush": true, 00:13:06.347 "reset": true, 00:13:06.347 "nvme_admin": false, 00:13:06.347 "nvme_io": false, 00:13:06.347 "nvme_io_md": false, 00:13:06.347 "write_zeroes": true, 00:13:06.347 "zcopy": true, 00:13:06.347 "get_zone_info": false, 00:13:06.347 "zone_management": false, 00:13:06.347 "zone_append": false, 00:13:06.347 "compare": false, 00:13:06.347 "compare_and_write": false, 00:13:06.347 "abort": true, 00:13:06.347 "seek_hole": false, 00:13:06.347 "seek_data": false, 00:13:06.347 "copy": true, 00:13:06.347 "nvme_iov_md": false 00:13:06.347 }, 00:13:06.347 "memory_domains": [ 00:13:06.347 { 00:13:06.347 "dma_device_id": "system", 00:13:06.347 "dma_device_type": 1 00:13:06.347 }, 00:13:06.347 { 00:13:06.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.347 "dma_device_type": 2 00:13:06.347 } 00:13:06.347 ], 00:13:06.347 "driver_specific": {} 00:13:06.347 } 00:13:06.347 ] 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.347 16:28:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.347 "name": "Existed_Raid", 00:13:06.347 "uuid": "181a7ee6-b377-40f9-9307-a60147818711", 00:13:06.347 "strip_size_kb": 64, 00:13:06.347 "state": "configuring", 00:13:06.347 "raid_level": "concat", 00:13:06.347 "superblock": true, 00:13:06.347 "num_base_bdevs": 4, 00:13:06.347 "num_base_bdevs_discovered": 3, 00:13:06.347 "num_base_bdevs_operational": 4, 00:13:06.347 "base_bdevs_list": [ 00:13:06.347 { 00:13:06.347 "name": "BaseBdev1", 00:13:06.347 "uuid": "6ad2bae1-3779-4c2e-ab98-17207277e272", 00:13:06.347 "is_configured": true, 00:13:06.347 "data_offset": 2048, 00:13:06.347 "data_size": 63488 00:13:06.347 }, 00:13:06.347 { 
00:13:06.347 "name": null, 00:13:06.347 "uuid": "1a455241-60f7-4220-b81c-132d8f0a34d5", 00:13:06.347 "is_configured": false, 00:13:06.347 "data_offset": 0, 00:13:06.347 "data_size": 63488 00:13:06.347 }, 00:13:06.347 { 00:13:06.347 "name": "BaseBdev3", 00:13:06.347 "uuid": "0a23aff7-85e9-405d-90bf-ff7dfce00f9f", 00:13:06.347 "is_configured": true, 00:13:06.347 "data_offset": 2048, 00:13:06.347 "data_size": 63488 00:13:06.347 }, 00:13:06.347 { 00:13:06.347 "name": "BaseBdev4", 00:13:06.347 "uuid": "b6f74948-48cf-4fb7-a71f-606f3f8909f8", 00:13:06.347 "is_configured": true, 00:13:06.347 "data_offset": 2048, 00:13:06.347 "data_size": 63488 00:13:06.347 } 00:13:06.347 ] 00:13:06.347 }' 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.347 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.607 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.607 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.607 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.607 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:06.607 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.607 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:06.607 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:06.607 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.607 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.607 [2024-12-06 16:28:48.443715] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.866 16:28:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.866 "name": "Existed_Raid", 00:13:06.866 "uuid": "181a7ee6-b377-40f9-9307-a60147818711", 00:13:06.866 "strip_size_kb": 64, 00:13:06.866 "state": "configuring", 00:13:06.866 "raid_level": "concat", 00:13:06.866 "superblock": true, 00:13:06.866 "num_base_bdevs": 4, 00:13:06.866 "num_base_bdevs_discovered": 2, 00:13:06.866 "num_base_bdevs_operational": 4, 00:13:06.866 "base_bdevs_list": [ 00:13:06.866 { 00:13:06.866 "name": "BaseBdev1", 00:13:06.866 "uuid": "6ad2bae1-3779-4c2e-ab98-17207277e272", 00:13:06.866 "is_configured": true, 00:13:06.866 "data_offset": 2048, 00:13:06.866 "data_size": 63488 00:13:06.866 }, 00:13:06.866 { 00:13:06.866 "name": null, 00:13:06.866 "uuid": "1a455241-60f7-4220-b81c-132d8f0a34d5", 00:13:06.866 "is_configured": false, 00:13:06.866 "data_offset": 0, 00:13:06.866 "data_size": 63488 00:13:06.866 }, 00:13:06.866 { 00:13:06.866 "name": null, 00:13:06.866 "uuid": "0a23aff7-85e9-405d-90bf-ff7dfce00f9f", 00:13:06.866 "is_configured": false, 00:13:06.866 "data_offset": 0, 00:13:06.866 "data_size": 63488 00:13:06.866 }, 00:13:06.866 { 00:13:06.866 "name": "BaseBdev4", 00:13:06.866 "uuid": "b6f74948-48cf-4fb7-a71f-606f3f8909f8", 00:13:06.866 "is_configured": true, 00:13:06.866 "data_offset": 2048, 00:13:06.866 "data_size": 63488 00:13:06.866 } 00:13:06.866 ] 00:13:06.866 }' 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.866 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.185 
16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.185 [2024-12-06 16:28:48.935051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.185 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.186 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.186 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.186 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:07.186 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.186 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.186 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.186 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.186 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.186 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.485 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.485 "name": "Existed_Raid", 00:13:07.485 "uuid": "181a7ee6-b377-40f9-9307-a60147818711", 00:13:07.485 "strip_size_kb": 64, 00:13:07.485 "state": "configuring", 00:13:07.485 "raid_level": "concat", 00:13:07.485 "superblock": true, 00:13:07.485 "num_base_bdevs": 4, 00:13:07.485 "num_base_bdevs_discovered": 3, 00:13:07.485 "num_base_bdevs_operational": 4, 00:13:07.485 "base_bdevs_list": [ 00:13:07.485 { 00:13:07.485 "name": "BaseBdev1", 00:13:07.485 "uuid": "6ad2bae1-3779-4c2e-ab98-17207277e272", 00:13:07.485 "is_configured": true, 00:13:07.485 "data_offset": 2048, 00:13:07.485 "data_size": 63488 00:13:07.485 }, 00:13:07.485 { 00:13:07.485 "name": null, 00:13:07.485 "uuid": "1a455241-60f7-4220-b81c-132d8f0a34d5", 00:13:07.485 "is_configured": false, 00:13:07.485 "data_offset": 0, 00:13:07.485 "data_size": 63488 00:13:07.485 }, 00:13:07.485 { 00:13:07.485 "name": "BaseBdev3", 00:13:07.485 "uuid": "0a23aff7-85e9-405d-90bf-ff7dfce00f9f", 00:13:07.485 "is_configured": true, 00:13:07.485 "data_offset": 2048, 00:13:07.485 "data_size": 63488 00:13:07.485 }, 00:13:07.485 { 00:13:07.485 "name": "BaseBdev4", 00:13:07.485 "uuid": 
"b6f74948-48cf-4fb7-a71f-606f3f8909f8", 00:13:07.485 "is_configured": true, 00:13:07.485 "data_offset": 2048, 00:13:07.485 "data_size": 63488 00:13:07.485 } 00:13:07.485 ] 00:13:07.485 }' 00:13:07.485 16:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.485 16:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.744 [2024-12-06 16:28:49.478095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.744 "name": "Existed_Raid", 00:13:07.744 "uuid": "181a7ee6-b377-40f9-9307-a60147818711", 00:13:07.744 "strip_size_kb": 64, 00:13:07.744 "state": "configuring", 00:13:07.744 "raid_level": "concat", 00:13:07.744 "superblock": true, 00:13:07.744 "num_base_bdevs": 4, 00:13:07.744 "num_base_bdevs_discovered": 2, 00:13:07.744 "num_base_bdevs_operational": 4, 00:13:07.744 "base_bdevs_list": [ 00:13:07.744 { 00:13:07.744 "name": null, 00:13:07.744 
"uuid": "6ad2bae1-3779-4c2e-ab98-17207277e272", 00:13:07.744 "is_configured": false, 00:13:07.744 "data_offset": 0, 00:13:07.744 "data_size": 63488 00:13:07.744 }, 00:13:07.744 { 00:13:07.744 "name": null, 00:13:07.744 "uuid": "1a455241-60f7-4220-b81c-132d8f0a34d5", 00:13:07.744 "is_configured": false, 00:13:07.744 "data_offset": 0, 00:13:07.744 "data_size": 63488 00:13:07.744 }, 00:13:07.744 { 00:13:07.744 "name": "BaseBdev3", 00:13:07.744 "uuid": "0a23aff7-85e9-405d-90bf-ff7dfce00f9f", 00:13:07.744 "is_configured": true, 00:13:07.744 "data_offset": 2048, 00:13:07.744 "data_size": 63488 00:13:07.744 }, 00:13:07.744 { 00:13:07.744 "name": "BaseBdev4", 00:13:07.744 "uuid": "b6f74948-48cf-4fb7-a71f-606f3f8909f8", 00:13:07.744 "is_configured": true, 00:13:07.744 "data_offset": 2048, 00:13:07.744 "data_size": 63488 00:13:07.744 } 00:13:07.744 ] 00:13:07.744 }' 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.744 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.311 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:08.311 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.311 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.311 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.311 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.311 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:08.311 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:08.311 16:28:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.311 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.311 [2024-12-06 16:28:49.915936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:08.311 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.311 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.312 16:28:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.312 "name": "Existed_Raid", 00:13:08.312 "uuid": "181a7ee6-b377-40f9-9307-a60147818711", 00:13:08.312 "strip_size_kb": 64, 00:13:08.312 "state": "configuring", 00:13:08.312 "raid_level": "concat", 00:13:08.312 "superblock": true, 00:13:08.312 "num_base_bdevs": 4, 00:13:08.312 "num_base_bdevs_discovered": 3, 00:13:08.312 "num_base_bdevs_operational": 4, 00:13:08.312 "base_bdevs_list": [ 00:13:08.312 { 00:13:08.312 "name": null, 00:13:08.312 "uuid": "6ad2bae1-3779-4c2e-ab98-17207277e272", 00:13:08.312 "is_configured": false, 00:13:08.312 "data_offset": 0, 00:13:08.312 "data_size": 63488 00:13:08.312 }, 00:13:08.312 { 00:13:08.312 "name": "BaseBdev2", 00:13:08.312 "uuid": "1a455241-60f7-4220-b81c-132d8f0a34d5", 00:13:08.312 "is_configured": true, 00:13:08.312 "data_offset": 2048, 00:13:08.312 "data_size": 63488 00:13:08.312 }, 00:13:08.312 { 00:13:08.312 "name": "BaseBdev3", 00:13:08.312 "uuid": "0a23aff7-85e9-405d-90bf-ff7dfce00f9f", 00:13:08.312 "is_configured": true, 00:13:08.312 "data_offset": 2048, 00:13:08.312 "data_size": 63488 00:13:08.312 }, 00:13:08.312 { 00:13:08.312 "name": "BaseBdev4", 00:13:08.312 "uuid": "b6f74948-48cf-4fb7-a71f-606f3f8909f8", 00:13:08.312 "is_configured": true, 00:13:08.312 "data_offset": 2048, 00:13:08.312 "data_size": 63488 00:13:08.312 } 00:13:08.312 ] 00:13:08.312 }' 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.312 16:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.570 16:28:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6ad2bae1-3779-4c2e-ab98-17207277e272 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.570 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.829 [2024-12-06 16:28:50.414224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:08.829 [2024-12-06 16:28:50.414509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:08.829 [2024-12-06 16:28:50.414560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:08.829 [2024-12-06 16:28:50.414890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:13:08.829 [2024-12-06 16:28:50.415054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:08.829 NewBaseBdev 00:13:08.829 [2024-12-06 16:28:50.415101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:08.829 [2024-12-06 16:28:50.415226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:08.829 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.829 16:28:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.829 [ 00:13:08.829 { 00:13:08.829 "name": "NewBaseBdev", 00:13:08.830 "aliases": [ 00:13:08.830 "6ad2bae1-3779-4c2e-ab98-17207277e272" 00:13:08.830 ], 00:13:08.830 "product_name": "Malloc disk", 00:13:08.830 "block_size": 512, 00:13:08.830 "num_blocks": 65536, 00:13:08.830 "uuid": "6ad2bae1-3779-4c2e-ab98-17207277e272", 00:13:08.830 "assigned_rate_limits": { 00:13:08.830 "rw_ios_per_sec": 0, 00:13:08.830 "rw_mbytes_per_sec": 0, 00:13:08.830 "r_mbytes_per_sec": 0, 00:13:08.830 "w_mbytes_per_sec": 0 00:13:08.830 }, 00:13:08.830 "claimed": true, 00:13:08.830 "claim_type": "exclusive_write", 00:13:08.830 "zoned": false, 00:13:08.830 "supported_io_types": { 00:13:08.830 "read": true, 00:13:08.830 "write": true, 00:13:08.830 "unmap": true, 00:13:08.830 "flush": true, 00:13:08.830 "reset": true, 00:13:08.830 "nvme_admin": false, 00:13:08.830 "nvme_io": false, 00:13:08.830 "nvme_io_md": false, 00:13:08.830 "write_zeroes": true, 00:13:08.830 "zcopy": true, 00:13:08.830 "get_zone_info": false, 00:13:08.830 "zone_management": false, 00:13:08.830 "zone_append": false, 00:13:08.830 "compare": false, 00:13:08.830 "compare_and_write": false, 00:13:08.830 "abort": true, 00:13:08.830 "seek_hole": false, 00:13:08.830 "seek_data": false, 00:13:08.830 "copy": true, 00:13:08.830 "nvme_iov_md": false 00:13:08.830 }, 00:13:08.830 "memory_domains": [ 00:13:08.830 { 00:13:08.830 "dma_device_id": "system", 00:13:08.830 "dma_device_type": 1 00:13:08.830 }, 00:13:08.830 { 00:13:08.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.830 "dma_device_type": 2 00:13:08.830 } 00:13:08.830 ], 00:13:08.830 "driver_specific": {} 00:13:08.830 } 00:13:08.830 ] 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:08.830 16:28:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.830 "name": "Existed_Raid", 00:13:08.830 "uuid": "181a7ee6-b377-40f9-9307-a60147818711", 00:13:08.830 "strip_size_kb": 64, 00:13:08.830 
"state": "online", 00:13:08.830 "raid_level": "concat", 00:13:08.830 "superblock": true, 00:13:08.830 "num_base_bdevs": 4, 00:13:08.830 "num_base_bdevs_discovered": 4, 00:13:08.830 "num_base_bdevs_operational": 4, 00:13:08.830 "base_bdevs_list": [ 00:13:08.830 { 00:13:08.830 "name": "NewBaseBdev", 00:13:08.830 "uuid": "6ad2bae1-3779-4c2e-ab98-17207277e272", 00:13:08.830 "is_configured": true, 00:13:08.830 "data_offset": 2048, 00:13:08.830 "data_size": 63488 00:13:08.830 }, 00:13:08.830 { 00:13:08.830 "name": "BaseBdev2", 00:13:08.830 "uuid": "1a455241-60f7-4220-b81c-132d8f0a34d5", 00:13:08.830 "is_configured": true, 00:13:08.830 "data_offset": 2048, 00:13:08.830 "data_size": 63488 00:13:08.830 }, 00:13:08.830 { 00:13:08.830 "name": "BaseBdev3", 00:13:08.830 "uuid": "0a23aff7-85e9-405d-90bf-ff7dfce00f9f", 00:13:08.830 "is_configured": true, 00:13:08.830 "data_offset": 2048, 00:13:08.830 "data_size": 63488 00:13:08.830 }, 00:13:08.830 { 00:13:08.830 "name": "BaseBdev4", 00:13:08.830 "uuid": "b6f74948-48cf-4fb7-a71f-606f3f8909f8", 00:13:08.830 "is_configured": true, 00:13:08.830 "data_offset": 2048, 00:13:08.830 "data_size": 63488 00:13:08.830 } 00:13:08.830 ] 00:13:08.830 }' 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.830 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.087 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:09.087 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:09.087 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:09.087 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:09.087 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:09.087 
16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:09.087 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:09.087 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.087 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.346 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:09.346 [2024-12-06 16:28:50.933779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.346 16:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.346 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:09.346 "name": "Existed_Raid", 00:13:09.346 "aliases": [ 00:13:09.346 "181a7ee6-b377-40f9-9307-a60147818711" 00:13:09.346 ], 00:13:09.346 "product_name": "Raid Volume", 00:13:09.346 "block_size": 512, 00:13:09.346 "num_blocks": 253952, 00:13:09.346 "uuid": "181a7ee6-b377-40f9-9307-a60147818711", 00:13:09.346 "assigned_rate_limits": { 00:13:09.346 "rw_ios_per_sec": 0, 00:13:09.346 "rw_mbytes_per_sec": 0, 00:13:09.346 "r_mbytes_per_sec": 0, 00:13:09.346 "w_mbytes_per_sec": 0 00:13:09.346 }, 00:13:09.346 "claimed": false, 00:13:09.346 "zoned": false, 00:13:09.346 "supported_io_types": { 00:13:09.346 "read": true, 00:13:09.346 "write": true, 00:13:09.346 "unmap": true, 00:13:09.346 "flush": true, 00:13:09.346 "reset": true, 00:13:09.346 "nvme_admin": false, 00:13:09.346 "nvme_io": false, 00:13:09.346 "nvme_io_md": false, 00:13:09.346 "write_zeroes": true, 00:13:09.346 "zcopy": false, 00:13:09.346 "get_zone_info": false, 00:13:09.346 "zone_management": false, 00:13:09.346 "zone_append": false, 00:13:09.346 "compare": false, 00:13:09.346 "compare_and_write": false, 00:13:09.346 "abort": 
false, 00:13:09.346 "seek_hole": false, 00:13:09.346 "seek_data": false, 00:13:09.346 "copy": false, 00:13:09.346 "nvme_iov_md": false 00:13:09.346 }, 00:13:09.346 "memory_domains": [ 00:13:09.346 { 00:13:09.346 "dma_device_id": "system", 00:13:09.346 "dma_device_type": 1 00:13:09.346 }, 00:13:09.346 { 00:13:09.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.346 "dma_device_type": 2 00:13:09.346 }, 00:13:09.346 { 00:13:09.346 "dma_device_id": "system", 00:13:09.346 "dma_device_type": 1 00:13:09.346 }, 00:13:09.346 { 00:13:09.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.346 "dma_device_type": 2 00:13:09.346 }, 00:13:09.346 { 00:13:09.346 "dma_device_id": "system", 00:13:09.346 "dma_device_type": 1 00:13:09.346 }, 00:13:09.346 { 00:13:09.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.346 "dma_device_type": 2 00:13:09.346 }, 00:13:09.346 { 00:13:09.346 "dma_device_id": "system", 00:13:09.346 "dma_device_type": 1 00:13:09.346 }, 00:13:09.346 { 00:13:09.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.346 "dma_device_type": 2 00:13:09.346 } 00:13:09.346 ], 00:13:09.346 "driver_specific": { 00:13:09.346 "raid": { 00:13:09.346 "uuid": "181a7ee6-b377-40f9-9307-a60147818711", 00:13:09.346 "strip_size_kb": 64, 00:13:09.346 "state": "online", 00:13:09.346 "raid_level": "concat", 00:13:09.346 "superblock": true, 00:13:09.346 "num_base_bdevs": 4, 00:13:09.346 "num_base_bdevs_discovered": 4, 00:13:09.346 "num_base_bdevs_operational": 4, 00:13:09.346 "base_bdevs_list": [ 00:13:09.346 { 00:13:09.346 "name": "NewBaseBdev", 00:13:09.346 "uuid": "6ad2bae1-3779-4c2e-ab98-17207277e272", 00:13:09.346 "is_configured": true, 00:13:09.346 "data_offset": 2048, 00:13:09.346 "data_size": 63488 00:13:09.346 }, 00:13:09.346 { 00:13:09.346 "name": "BaseBdev2", 00:13:09.346 "uuid": "1a455241-60f7-4220-b81c-132d8f0a34d5", 00:13:09.346 "is_configured": true, 00:13:09.346 "data_offset": 2048, 00:13:09.346 "data_size": 63488 00:13:09.346 }, 00:13:09.346 { 00:13:09.346 
"name": "BaseBdev3", 00:13:09.346 "uuid": "0a23aff7-85e9-405d-90bf-ff7dfce00f9f", 00:13:09.346 "is_configured": true, 00:13:09.346 "data_offset": 2048, 00:13:09.346 "data_size": 63488 00:13:09.346 }, 00:13:09.346 { 00:13:09.346 "name": "BaseBdev4", 00:13:09.346 "uuid": "b6f74948-48cf-4fb7-a71f-606f3f8909f8", 00:13:09.346 "is_configured": true, 00:13:09.346 "data_offset": 2048, 00:13:09.346 "data_size": 63488 00:13:09.346 } 00:13:09.346 ] 00:13:09.346 } 00:13:09.346 } 00:13:09.346 }' 00:13:09.346 16:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:09.346 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:09.346 BaseBdev2 00:13:09.346 BaseBdev3 00:13:09.346 BaseBdev4' 00:13:09.346 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.346 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:09.346 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.346 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.346 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:09.346 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.346 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.346 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.347 16:28:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.347 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.605 [2024-12-06 16:28:51.260831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:09.605 [2024-12-06 16:28:51.260862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.605 [2024-12-06 16:28:51.260942] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.605 [2024-12-06 16:28:51.261011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.605 [2024-12-06 16:28:51.261021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83221 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83221 ']' 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83221 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.605 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83221 00:13:09.606 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.606 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.606 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83221' 00:13:09.606 killing process with pid 83221 00:13:09.606 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83221 00:13:09.606 [2024-12-06 16:28:51.310969] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:09.606 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83221 00:13:09.606 [2024-12-06 16:28:51.352715] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.865 16:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:09.865 00:13:09.865 real 0m9.662s 00:13:09.865 user 0m16.556s 00:13:09.865 sys 0m2.051s 00:13:09.865 ************************************ 00:13:09.865 END TEST raid_state_function_test_sb 00:13:09.865 
************************************ 00:13:09.865 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.865 16:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.865 16:28:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:13:09.865 16:28:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:09.865 16:28:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.865 16:28:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.865 ************************************ 00:13:09.865 START TEST raid_superblock_test 00:13:09.865 ************************************ 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:09.865 16:28:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83869 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83869 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83869 ']' 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.865 16:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.123 [2024-12-06 16:28:51.736434] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:13:10.123 [2024-12-06 16:28:51.736690] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83869 ] 00:13:10.123 [2024-12-06 16:28:51.925451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.123 [2024-12-06 16:28:51.953381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.382 [2024-12-06 16:28:51.995745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.382 [2024-12-06 16:28:51.995871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:10.949 
16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.949 malloc1 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.949 [2024-12-06 16:28:52.635598] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:10.949 [2024-12-06 16:28:52.635675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.949 [2024-12-06 16:28:52.635707] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:10.949 [2024-12-06 16:28:52.635743] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.949 [2024-12-06 16:28:52.638420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.949 [2024-12-06 16:28:52.638503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:10.949 pt1 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.949 malloc2 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.949 [2024-12-06 16:28:52.668443] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:10.949 [2024-12-06 16:28:52.668558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.949 [2024-12-06 16:28:52.668614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:10.949 [2024-12-06 16:28:52.668653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.949 [2024-12-06 16:28:52.671022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.949 [2024-12-06 16:28:52.671099] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:10.949 
pt2 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.949 malloc3 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.949 [2024-12-06 16:28:52.701324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:10.949 [2024-12-06 16:28:52.701427] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.949 [2024-12-06 16:28:52.701485] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:10.949 [2024-12-06 16:28:52.701523] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.949 [2024-12-06 16:28:52.703865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.949 [2024-12-06 16:28:52.703945] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:10.949 pt3 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.949 malloc4 00:13:10.949 16:28:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.950 [2024-12-06 16:28:52.741466] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:10.950 [2024-12-06 16:28:52.741587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.950 [2024-12-06 16:28:52.741628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:10.950 [2024-12-06 16:28:52.741662] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.950 [2024-12-06 16:28:52.743966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.950 [2024-12-06 16:28:52.744050] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:10.950 pt4 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.950 [2024-12-06 16:28:52.757548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:10.950 [2024-12-06 
16:28:52.759595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:10.950 [2024-12-06 16:28:52.759729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:10.950 [2024-12-06 16:28:52.759825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:10.950 [2024-12-06 16:28:52.760054] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:10.950 [2024-12-06 16:28:52.760108] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:10.950 [2024-12-06 16:28:52.760458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:10.950 [2024-12-06 16:28:52.760685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:10.950 [2024-12-06 16:28:52.760732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:10.950 [2024-12-06 16:28:52.760937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.950 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.209 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.209 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.209 "name": "raid_bdev1", 00:13:11.209 "uuid": "f3642d8d-6c86-429c-92be-9dad9113e465", 00:13:11.209 "strip_size_kb": 64, 00:13:11.209 "state": "online", 00:13:11.209 "raid_level": "concat", 00:13:11.209 "superblock": true, 00:13:11.209 "num_base_bdevs": 4, 00:13:11.209 "num_base_bdevs_discovered": 4, 00:13:11.209 "num_base_bdevs_operational": 4, 00:13:11.209 "base_bdevs_list": [ 00:13:11.209 { 00:13:11.209 "name": "pt1", 00:13:11.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.209 "is_configured": true, 00:13:11.209 "data_offset": 2048, 00:13:11.209 "data_size": 63488 00:13:11.209 }, 00:13:11.209 { 00:13:11.209 "name": "pt2", 00:13:11.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.209 "is_configured": true, 00:13:11.209 "data_offset": 2048, 00:13:11.209 "data_size": 63488 00:13:11.209 }, 00:13:11.209 { 00:13:11.209 "name": "pt3", 00:13:11.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.209 "is_configured": true, 00:13:11.209 "data_offset": 2048, 00:13:11.209 
"data_size": 63488 00:13:11.209 }, 00:13:11.209 { 00:13:11.209 "name": "pt4", 00:13:11.209 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:11.209 "is_configured": true, 00:13:11.209 "data_offset": 2048, 00:13:11.209 "data_size": 63488 00:13:11.209 } 00:13:11.209 ] 00:13:11.209 }' 00:13:11.209 16:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.209 16:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.468 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:11.468 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:11.468 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:11.468 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:11.468 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:11.468 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:11.468 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:11.468 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.468 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.468 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.468 [2024-12-06 16:28:53.181277] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.468 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.468 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:11.468 "name": "raid_bdev1", 00:13:11.468 "aliases": [ 00:13:11.468 "f3642d8d-6c86-429c-92be-9dad9113e465" 
00:13:11.468 ], 00:13:11.468 "product_name": "Raid Volume", 00:13:11.468 "block_size": 512, 00:13:11.468 "num_blocks": 253952, 00:13:11.468 "uuid": "f3642d8d-6c86-429c-92be-9dad9113e465", 00:13:11.468 "assigned_rate_limits": { 00:13:11.468 "rw_ios_per_sec": 0, 00:13:11.468 "rw_mbytes_per_sec": 0, 00:13:11.468 "r_mbytes_per_sec": 0, 00:13:11.468 "w_mbytes_per_sec": 0 00:13:11.468 }, 00:13:11.468 "claimed": false, 00:13:11.468 "zoned": false, 00:13:11.468 "supported_io_types": { 00:13:11.468 "read": true, 00:13:11.468 "write": true, 00:13:11.468 "unmap": true, 00:13:11.468 "flush": true, 00:13:11.468 "reset": true, 00:13:11.468 "nvme_admin": false, 00:13:11.468 "nvme_io": false, 00:13:11.468 "nvme_io_md": false, 00:13:11.468 "write_zeroes": true, 00:13:11.468 "zcopy": false, 00:13:11.468 "get_zone_info": false, 00:13:11.468 "zone_management": false, 00:13:11.468 "zone_append": false, 00:13:11.468 "compare": false, 00:13:11.468 "compare_and_write": false, 00:13:11.468 "abort": false, 00:13:11.468 "seek_hole": false, 00:13:11.468 "seek_data": false, 00:13:11.468 "copy": false, 00:13:11.468 "nvme_iov_md": false 00:13:11.468 }, 00:13:11.468 "memory_domains": [ 00:13:11.468 { 00:13:11.468 "dma_device_id": "system", 00:13:11.468 "dma_device_type": 1 00:13:11.468 }, 00:13:11.468 { 00:13:11.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.468 "dma_device_type": 2 00:13:11.468 }, 00:13:11.468 { 00:13:11.468 "dma_device_id": "system", 00:13:11.468 "dma_device_type": 1 00:13:11.468 }, 00:13:11.468 { 00:13:11.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.468 "dma_device_type": 2 00:13:11.468 }, 00:13:11.468 { 00:13:11.468 "dma_device_id": "system", 00:13:11.468 "dma_device_type": 1 00:13:11.468 }, 00:13:11.468 { 00:13:11.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.468 "dma_device_type": 2 00:13:11.468 }, 00:13:11.468 { 00:13:11.468 "dma_device_id": "system", 00:13:11.468 "dma_device_type": 1 00:13:11.468 }, 00:13:11.468 { 00:13:11.468 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:11.468 "dma_device_type": 2 00:13:11.468 } 00:13:11.468 ], 00:13:11.468 "driver_specific": { 00:13:11.468 "raid": { 00:13:11.468 "uuid": "f3642d8d-6c86-429c-92be-9dad9113e465", 00:13:11.468 "strip_size_kb": 64, 00:13:11.468 "state": "online", 00:13:11.468 "raid_level": "concat", 00:13:11.468 "superblock": true, 00:13:11.468 "num_base_bdevs": 4, 00:13:11.468 "num_base_bdevs_discovered": 4, 00:13:11.468 "num_base_bdevs_operational": 4, 00:13:11.468 "base_bdevs_list": [ 00:13:11.468 { 00:13:11.468 "name": "pt1", 00:13:11.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.468 "is_configured": true, 00:13:11.468 "data_offset": 2048, 00:13:11.468 "data_size": 63488 00:13:11.468 }, 00:13:11.468 { 00:13:11.468 "name": "pt2", 00:13:11.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.469 "is_configured": true, 00:13:11.469 "data_offset": 2048, 00:13:11.469 "data_size": 63488 00:13:11.469 }, 00:13:11.469 { 00:13:11.469 "name": "pt3", 00:13:11.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.469 "is_configured": true, 00:13:11.469 "data_offset": 2048, 00:13:11.469 "data_size": 63488 00:13:11.469 }, 00:13:11.469 { 00:13:11.469 "name": "pt4", 00:13:11.469 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:11.469 "is_configured": true, 00:13:11.469 "data_offset": 2048, 00:13:11.469 "data_size": 63488 00:13:11.469 } 00:13:11.469 ] 00:13:11.469 } 00:13:11.469 } 00:13:11.469 }' 00:13:11.469 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:11.469 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:11.469 pt2 00:13:11.469 pt3 00:13:11.469 pt4' 00:13:11.469 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.727 16:28:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:11.727 [2024-12-06 16:28:53.516741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f3642d8d-6c86-429c-92be-9dad9113e465 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f3642d8d-6c86-429c-92be-9dad9113e465 ']' 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.727 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.987 [2024-12-06 16:28:53.568268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.987 [2024-12-06 16:28:53.568311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.987 [2024-12-06 16:28:53.568426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.987 [2024-12-06 16:28:53.568512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.987 [2024-12-06 16:28:53.568526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.987 [2024-12-06 16:28:53.735992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:11.987 [2024-12-06 16:28:53.738171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:11.987 [2024-12-06 16:28:53.738303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:11.987 [2024-12-06 16:28:53.738344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:11.987 [2024-12-06 16:28:53.738397] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:11.987 [2024-12-06 16:28:53.738447] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:11.987 [2024-12-06 16:28:53.738486] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:11.987 [2024-12-06 16:28:53.738506] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:11.987 [2024-12-06 16:28:53.738523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.987 [2024-12-06 16:28:53.738534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:13:11.987 request: 00:13:11.987 { 00:13:11.987 "name": "raid_bdev1", 00:13:11.987 "raid_level": "concat", 00:13:11.987 "base_bdevs": [ 00:13:11.987 "malloc1", 00:13:11.987 "malloc2", 00:13:11.987 "malloc3", 00:13:11.987 "malloc4" 00:13:11.987 ], 00:13:11.987 "strip_size_kb": 64, 00:13:11.987 "superblock": false, 00:13:11.987 "method": "bdev_raid_create", 00:13:11.987 "req_id": 1 00:13:11.987 } 00:13:11.987 Got JSON-RPC error response 00:13:11.987 response: 00:13:11.987 { 00:13:11.987 "code": -17, 00:13:11.987 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:11.987 } 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:11.987 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.988 [2024-12-06 16:28:53.803793] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:11.988 [2024-12-06 16:28:53.803915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.988 [2024-12-06 16:28:53.803962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:11.988 [2024-12-06 16:28:53.804000] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.988 [2024-12-06 16:28:53.806347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.988 [2024-12-06 16:28:53.806416] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:11.988 [2024-12-06 16:28:53.806519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:11.988 [2024-12-06 16:28:53.806591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:11.988 pt1 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.988 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.246 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.246 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.246 "name": "raid_bdev1", 00:13:12.246 "uuid": "f3642d8d-6c86-429c-92be-9dad9113e465", 00:13:12.246 "strip_size_kb": 64, 00:13:12.246 "state": "configuring", 00:13:12.246 "raid_level": "concat", 00:13:12.246 "superblock": true, 00:13:12.246 "num_base_bdevs": 4, 00:13:12.246 "num_base_bdevs_discovered": 1, 00:13:12.246 "num_base_bdevs_operational": 4, 00:13:12.246 "base_bdevs_list": [ 00:13:12.246 { 00:13:12.246 "name": "pt1", 00:13:12.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.246 "is_configured": true, 00:13:12.246 "data_offset": 2048, 00:13:12.246 "data_size": 63488 00:13:12.246 }, 00:13:12.246 { 00:13:12.246 "name": null, 00:13:12.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.247 "is_configured": false, 00:13:12.247 "data_offset": 2048, 00:13:12.247 "data_size": 63488 00:13:12.247 }, 00:13:12.247 { 00:13:12.247 "name": null, 00:13:12.247 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.247 "is_configured": false, 00:13:12.247 "data_offset": 2048, 00:13:12.247 "data_size": 63488 00:13:12.247 }, 00:13:12.247 { 00:13:12.247 "name": null, 00:13:12.247 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:12.247 "is_configured": false, 00:13:12.247 "data_offset": 2048, 00:13:12.247 "data_size": 63488 00:13:12.247 } 00:13:12.247 ] 00:13:12.247 }' 00:13:12.247 16:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.247 16:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 [2024-12-06 16:28:54.295084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:12.506 [2024-12-06 16:28:54.295227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.506 [2024-12-06 16:28:54.295307] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:12.506 [2024-12-06 16:28:54.295350] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.506 [2024-12-06 16:28:54.295849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.506 [2024-12-06 16:28:54.295923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:12.506 [2024-12-06 16:28:54.296042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:12.506 [2024-12-06 16:28:54.296097] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.506 pt2 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 [2024-12-06 16:28:54.307058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.506 16:28:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.765 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.765 "name": "raid_bdev1", 00:13:12.765 "uuid": "f3642d8d-6c86-429c-92be-9dad9113e465", 00:13:12.765 "strip_size_kb": 64, 00:13:12.765 "state": "configuring", 00:13:12.765 "raid_level": "concat", 00:13:12.765 "superblock": true, 00:13:12.765 "num_base_bdevs": 4, 00:13:12.765 "num_base_bdevs_discovered": 1, 00:13:12.765 "num_base_bdevs_operational": 4, 00:13:12.765 "base_bdevs_list": [ 00:13:12.765 { 00:13:12.765 "name": "pt1", 00:13:12.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.765 "is_configured": true, 00:13:12.765 "data_offset": 2048, 00:13:12.765 "data_size": 63488 00:13:12.765 }, 00:13:12.765 { 00:13:12.765 "name": null, 00:13:12.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.765 "is_configured": false, 00:13:12.765 "data_offset": 0, 00:13:12.765 "data_size": 63488 00:13:12.765 }, 00:13:12.765 { 00:13:12.765 "name": null, 00:13:12.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.765 "is_configured": false, 00:13:12.765 "data_offset": 2048, 00:13:12.765 "data_size": 63488 00:13:12.765 }, 00:13:12.765 { 00:13:12.765 "name": null, 00:13:12.765 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:12.765 "is_configured": false, 00:13:12.765 "data_offset": 2048, 00:13:12.765 "data_size": 63488 00:13:12.765 } 00:13:12.765 ] 00:13:12.765 }' 00:13:12.765 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.765 16:28:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.024 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.025 [2024-12-06 16:28:54.802195] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:13.025 [2024-12-06 16:28:54.802294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.025 [2024-12-06 16:28:54.802313] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:13.025 [2024-12-06 16:28:54.802326] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.025 [2024-12-06 16:28:54.802753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.025 [2024-12-06 16:28:54.802773] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:13.025 [2024-12-06 16:28:54.802849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:13.025 [2024-12-06 16:28:54.802874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.025 pt2 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.025 [2024-12-06 16:28:54.810171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:13.025 [2024-12-06 16:28:54.810269] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.025 [2024-12-06 16:28:54.810295] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:13.025 [2024-12-06 16:28:54.810309] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.025 [2024-12-06 16:28:54.810708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.025 [2024-12-06 16:28:54.810726] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:13.025 [2024-12-06 16:28:54.810796] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:13.025 [2024-12-06 16:28:54.810825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:13.025 pt3 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.025 [2024-12-06 16:28:54.818164] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:13.025 [2024-12-06 16:28:54.818251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.025 [2024-12-06 16:28:54.818271] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:13.025 [2024-12-06 16:28:54.818282] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.025 [2024-12-06 16:28:54.818680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.025 [2024-12-06 16:28:54.818709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:13.025 [2024-12-06 16:28:54.818780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:13.025 [2024-12-06 16:28:54.818806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:13.025 [2024-12-06 16:28:54.818922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:13.025 [2024-12-06 16:28:54.818937] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:13.025 [2024-12-06 16:28:54.819193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:13.025 [2024-12-06 16:28:54.819349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:13.025 [2024-12-06 16:28:54.819361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:13:13.025 [2024-12-06 16:28:54.819475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.025 pt4 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.025 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.285 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.285 "name": "raid_bdev1", 00:13:13.285 "uuid": "f3642d8d-6c86-429c-92be-9dad9113e465", 00:13:13.285 "strip_size_kb": 64, 00:13:13.285 "state": "online", 00:13:13.285 "raid_level": "concat", 00:13:13.285 
"superblock": true, 00:13:13.285 "num_base_bdevs": 4, 00:13:13.285 "num_base_bdevs_discovered": 4, 00:13:13.285 "num_base_bdevs_operational": 4, 00:13:13.285 "base_bdevs_list": [ 00:13:13.285 { 00:13:13.285 "name": "pt1", 00:13:13.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.285 "is_configured": true, 00:13:13.285 "data_offset": 2048, 00:13:13.285 "data_size": 63488 00:13:13.285 }, 00:13:13.285 { 00:13:13.285 "name": "pt2", 00:13:13.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.285 "is_configured": true, 00:13:13.285 "data_offset": 2048, 00:13:13.285 "data_size": 63488 00:13:13.285 }, 00:13:13.286 { 00:13:13.286 "name": "pt3", 00:13:13.286 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.286 "is_configured": true, 00:13:13.286 "data_offset": 2048, 00:13:13.286 "data_size": 63488 00:13:13.286 }, 00:13:13.286 { 00:13:13.286 "name": "pt4", 00:13:13.286 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.286 "is_configured": true, 00:13:13.286 "data_offset": 2048, 00:13:13.286 "data_size": 63488 00:13:13.286 } 00:13:13.286 ] 00:13:13.286 }' 00:13:13.286 16:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.286 16:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.548 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:13.548 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:13.548 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:13.548 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:13.548 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:13.548 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:13.548 16:28:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:13.548 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.548 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.548 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:13.549 [2024-12-06 16:28:55.241801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.549 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.549 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:13.549 "name": "raid_bdev1", 00:13:13.549 "aliases": [ 00:13:13.549 "f3642d8d-6c86-429c-92be-9dad9113e465" 00:13:13.549 ], 00:13:13.549 "product_name": "Raid Volume", 00:13:13.549 "block_size": 512, 00:13:13.549 "num_blocks": 253952, 00:13:13.549 "uuid": "f3642d8d-6c86-429c-92be-9dad9113e465", 00:13:13.549 "assigned_rate_limits": { 00:13:13.549 "rw_ios_per_sec": 0, 00:13:13.549 "rw_mbytes_per_sec": 0, 00:13:13.549 "r_mbytes_per_sec": 0, 00:13:13.549 "w_mbytes_per_sec": 0 00:13:13.549 }, 00:13:13.549 "claimed": false, 00:13:13.549 "zoned": false, 00:13:13.549 "supported_io_types": { 00:13:13.549 "read": true, 00:13:13.549 "write": true, 00:13:13.549 "unmap": true, 00:13:13.549 "flush": true, 00:13:13.549 "reset": true, 00:13:13.549 "nvme_admin": false, 00:13:13.549 "nvme_io": false, 00:13:13.549 "nvme_io_md": false, 00:13:13.549 "write_zeroes": true, 00:13:13.549 "zcopy": false, 00:13:13.549 "get_zone_info": false, 00:13:13.549 "zone_management": false, 00:13:13.549 "zone_append": false, 00:13:13.549 "compare": false, 00:13:13.549 "compare_and_write": false, 00:13:13.549 "abort": false, 00:13:13.549 "seek_hole": false, 00:13:13.549 "seek_data": false, 00:13:13.549 "copy": false, 00:13:13.549 "nvme_iov_md": false 00:13:13.549 }, 00:13:13.549 
"memory_domains": [ 00:13:13.549 { 00:13:13.549 "dma_device_id": "system", 00:13:13.549 "dma_device_type": 1 00:13:13.549 }, 00:13:13.549 { 00:13:13.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.549 "dma_device_type": 2 00:13:13.549 }, 00:13:13.549 { 00:13:13.549 "dma_device_id": "system", 00:13:13.549 "dma_device_type": 1 00:13:13.549 }, 00:13:13.549 { 00:13:13.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.549 "dma_device_type": 2 00:13:13.549 }, 00:13:13.549 { 00:13:13.549 "dma_device_id": "system", 00:13:13.549 "dma_device_type": 1 00:13:13.549 }, 00:13:13.549 { 00:13:13.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.549 "dma_device_type": 2 00:13:13.549 }, 00:13:13.549 { 00:13:13.549 "dma_device_id": "system", 00:13:13.549 "dma_device_type": 1 00:13:13.549 }, 00:13:13.549 { 00:13:13.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.549 "dma_device_type": 2 00:13:13.549 } 00:13:13.549 ], 00:13:13.549 "driver_specific": { 00:13:13.549 "raid": { 00:13:13.549 "uuid": "f3642d8d-6c86-429c-92be-9dad9113e465", 00:13:13.549 "strip_size_kb": 64, 00:13:13.549 "state": "online", 00:13:13.549 "raid_level": "concat", 00:13:13.549 "superblock": true, 00:13:13.549 "num_base_bdevs": 4, 00:13:13.549 "num_base_bdevs_discovered": 4, 00:13:13.549 "num_base_bdevs_operational": 4, 00:13:13.549 "base_bdevs_list": [ 00:13:13.549 { 00:13:13.549 "name": "pt1", 00:13:13.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.549 "is_configured": true, 00:13:13.549 "data_offset": 2048, 00:13:13.549 "data_size": 63488 00:13:13.549 }, 00:13:13.549 { 00:13:13.549 "name": "pt2", 00:13:13.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.549 "is_configured": true, 00:13:13.549 "data_offset": 2048, 00:13:13.549 "data_size": 63488 00:13:13.549 }, 00:13:13.549 { 00:13:13.549 "name": "pt3", 00:13:13.549 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.549 "is_configured": true, 00:13:13.549 "data_offset": 2048, 00:13:13.549 "data_size": 63488 
00:13:13.549 }, 00:13:13.549 { 00:13:13.549 "name": "pt4", 00:13:13.549 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.549 "is_configured": true, 00:13:13.549 "data_offset": 2048, 00:13:13.549 "data_size": 63488 00:13:13.549 } 00:13:13.549 ] 00:13:13.549 } 00:13:13.549 } 00:13:13.549 }' 00:13:13.549 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:13.549 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:13.549 pt2 00:13:13.549 pt3 00:13:13.549 pt4' 00:13:13.549 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.549 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:13.549 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.549 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.549 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:13.549 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.549 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.809 [2024-12-06 16:28:55.529437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f3642d8d-6c86-429c-92be-9dad9113e465 '!=' f3642d8d-6c86-429c-92be-9dad9113e465 ']' 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83869 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83869 ']' 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83869 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83869 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83869' 00:13:13.809 killing process with pid 83869 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 83869 00:13:13.809 [2024-12-06 16:28:55.614891] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:13.809 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 83869 00:13:13.809 [2024-12-06 16:28:55.615110] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.809 [2024-12-06 16:28:55.615218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.809 [2024-12-06 16:28:55.615293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:13:14.067 [2024-12-06 16:28:55.661770] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.067 16:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:14.067 00:13:14.067 real 0m4.229s 00:13:14.067 user 0m6.670s 00:13:14.067 sys 0m0.930s 00:13:14.067 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.067 16:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.067 ************************************ 00:13:14.067 END TEST raid_superblock_test 
00:13:14.067 ************************************ 00:13:14.371 16:28:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:14.371 16:28:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:14.371 16:28:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.371 16:28:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 ************************************ 00:13:14.371 START TEST raid_read_error_test 00:13:14.371 ************************************ 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.a2WTE1jKXA 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84123 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84123 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 84123 ']' 00:13:14.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.371 16:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.371 [2024-12-06 16:28:56.045871] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:13:14.371 [2024-12-06 16:28:56.046000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84123 ] 00:13:14.675 [2024-12-06 16:28:56.218807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.675 [2024-12-06 16:28:56.246704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.675 [2024-12-06 16:28:56.289686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.675 [2024-12-06 16:28:56.289725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.244 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.244 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:15.244 16:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:15.244 16:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:15.244 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.244 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.244 BaseBdev1_malloc 00:13:15.244 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.244 16:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:15.244 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.244 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.244 true 00:13:15.244 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:15.245 16:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:15.245 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.245 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.245 [2024-12-06 16:28:56.989790] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:15.245 [2024-12-06 16:28:56.989850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.245 [2024-12-06 16:28:56.989874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:15.245 [2024-12-06 16:28:56.989884] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.245 [2024-12-06 16:28:56.992401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.245 [2024-12-06 16:28:56.992438] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.245 BaseBdev1 00:13:15.245 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.245 16:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:15.245 16:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:15.245 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.245 16:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.245 BaseBdev2_malloc 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.245 true 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.245 [2024-12-06 16:28:57.030666] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:15.245 [2024-12-06 16:28:57.030718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.245 [2024-12-06 16:28:57.030737] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:15.245 [2024-12-06 16:28:57.030746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.245 [2024-12-06 16:28:57.033179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.245 [2024-12-06 16:28:57.033224] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:15.245 BaseBdev2 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.245 BaseBdev3_malloc 00:13:15.245 16:28:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.245 true 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.245 [2024-12-06 16:28:57.071454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:15.245 [2024-12-06 16:28:57.071508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.245 [2024-12-06 16:28:57.071539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:15.245 [2024-12-06 16:28:57.071567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.245 [2024-12-06 16:28:57.073947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.245 [2024-12-06 16:28:57.073983] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:15.245 BaseBdev3 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.245 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.504 BaseBdev4_malloc 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.504 true 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.504 [2024-12-06 16:28:57.123305] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:15.504 [2024-12-06 16:28:57.123356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.504 [2024-12-06 16:28:57.123378] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:15.504 [2024-12-06 16:28:57.123386] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.504 [2024-12-06 16:28:57.125694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.504 [2024-12-06 16:28:57.125728] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:15.504 BaseBdev4 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.504 [2024-12-06 16:28:57.135352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.504 [2024-12-06 16:28:57.137302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.504 [2024-12-06 16:28:57.137400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:15.504 [2024-12-06 16:28:57.137460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:15.504 [2024-12-06 16:28:57.137699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:13:15.504 [2024-12-06 16:28:57.137712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:15.504 [2024-12-06 16:28:57.138006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:15.504 [2024-12-06 16:28:57.138141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:13:15.504 [2024-12-06 16:28:57.138159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:13:15.504 [2024-12-06 16:28:57.138308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:15.504 16:28:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.504 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.504 "name": "raid_bdev1", 00:13:15.504 "uuid": "41c8640b-cbeb-49ef-a056-912c0c2afa27", 00:13:15.504 "strip_size_kb": 64, 00:13:15.504 "state": "online", 00:13:15.504 "raid_level": "concat", 00:13:15.504 "superblock": true, 00:13:15.504 "num_base_bdevs": 4, 00:13:15.504 "num_base_bdevs_discovered": 4, 00:13:15.504 "num_base_bdevs_operational": 4, 00:13:15.504 "base_bdevs_list": [ 
00:13:15.504 { 00:13:15.504 "name": "BaseBdev1", 00:13:15.504 "uuid": "59e05ffb-c052-540a-8ef4-a9d752000175", 00:13:15.504 "is_configured": true, 00:13:15.504 "data_offset": 2048, 00:13:15.505 "data_size": 63488 00:13:15.505 }, 00:13:15.505 { 00:13:15.505 "name": "BaseBdev2", 00:13:15.505 "uuid": "73b8fbbe-fe97-544a-9333-3f10ec67455a", 00:13:15.505 "is_configured": true, 00:13:15.505 "data_offset": 2048, 00:13:15.505 "data_size": 63488 00:13:15.505 }, 00:13:15.505 { 00:13:15.505 "name": "BaseBdev3", 00:13:15.505 "uuid": "238b82e2-79a9-5d42-8b07-646afce8cc88", 00:13:15.505 "is_configured": true, 00:13:15.505 "data_offset": 2048, 00:13:15.505 "data_size": 63488 00:13:15.505 }, 00:13:15.505 { 00:13:15.505 "name": "BaseBdev4", 00:13:15.505 "uuid": "f7747f97-d46b-5084-95e0-e3ca821cc14a", 00:13:15.505 "is_configured": true, 00:13:15.505 "data_offset": 2048, 00:13:15.505 "data_size": 63488 00:13:15.505 } 00:13:15.505 ] 00:13:15.505 }' 00:13:15.505 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.505 16:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.763 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:15.763 16:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:16.023 [2024-12-06 16:28:57.678813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.962 16:28:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.962 16:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.962 16:28:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.962 "name": "raid_bdev1", 00:13:16.962 "uuid": "41c8640b-cbeb-49ef-a056-912c0c2afa27", 00:13:16.962 "strip_size_kb": 64, 00:13:16.962 "state": "online", 00:13:16.962 "raid_level": "concat", 00:13:16.962 "superblock": true, 00:13:16.962 "num_base_bdevs": 4, 00:13:16.962 "num_base_bdevs_discovered": 4, 00:13:16.962 "num_base_bdevs_operational": 4, 00:13:16.962 "base_bdevs_list": [ 00:13:16.962 { 00:13:16.962 "name": "BaseBdev1", 00:13:16.962 "uuid": "59e05ffb-c052-540a-8ef4-a9d752000175", 00:13:16.962 "is_configured": true, 00:13:16.962 "data_offset": 2048, 00:13:16.962 "data_size": 63488 00:13:16.962 }, 00:13:16.962 { 00:13:16.962 "name": "BaseBdev2", 00:13:16.962 "uuid": "73b8fbbe-fe97-544a-9333-3f10ec67455a", 00:13:16.963 "is_configured": true, 00:13:16.963 "data_offset": 2048, 00:13:16.963 "data_size": 63488 00:13:16.963 }, 00:13:16.963 { 00:13:16.963 "name": "BaseBdev3", 00:13:16.963 "uuid": "238b82e2-79a9-5d42-8b07-646afce8cc88", 00:13:16.963 "is_configured": true, 00:13:16.963 "data_offset": 2048, 00:13:16.963 "data_size": 63488 00:13:16.963 }, 00:13:16.963 { 00:13:16.963 "name": "BaseBdev4", 00:13:16.963 "uuid": "f7747f97-d46b-5084-95e0-e3ca821cc14a", 00:13:16.963 "is_configured": true, 00:13:16.963 "data_offset": 2048, 00:13:16.963 "data_size": 63488 00:13:16.963 } 00:13:16.963 ] 00:13:16.963 }' 00:13:16.963 16:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.963 16:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.222 16:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:17.222 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.222 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.222 [2024-12-06 16:28:59.059834] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:17.482 [2024-12-06 16:28:59.059925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.482 [2024-12-06 16:28:59.063122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.482 [2024-12-06 16:28:59.063278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.482 [2024-12-06 16:28:59.063341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.482 [2024-12-06 16:28:59.063353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:13:17.482 { 00:13:17.482 "results": [ 00:13:17.482 { 00:13:17.482 "job": "raid_bdev1", 00:13:17.482 "core_mask": "0x1", 00:13:17.482 "workload": "randrw", 00:13:17.482 "percentage": 50, 00:13:17.482 "status": "finished", 00:13:17.482 "queue_depth": 1, 00:13:17.482 "io_size": 131072, 00:13:17.482 "runtime": 1.381638, 00:13:17.482 "iops": 14583.41475842442, 00:13:17.482 "mibps": 1822.9268448030525, 00:13:17.482 "io_failed": 1, 00:13:17.482 "io_timeout": 0, 00:13:17.483 "avg_latency_us": 94.69024378298134, 00:13:17.483 "min_latency_us": 28.05938864628821, 00:13:17.483 "max_latency_us": 1616.9362445414847 00:13:17.483 } 00:13:17.483 ], 00:13:17.483 "core_count": 1 00:13:17.483 } 00:13:17.483 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.483 16:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84123 00:13:17.483 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 84123 ']' 00:13:17.483 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 84123 00:13:17.483 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:17.483 16:28:59 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.483 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84123 00:13:17.483 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.483 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.483 killing process with pid 84123 00:13:17.483 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84123' 00:13:17.483 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 84123 00:13:17.483 [2024-12-06 16:28:59.108646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:17.483 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 84123 00:13:17.483 [2024-12-06 16:28:59.145113] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:17.742 16:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:17.742 16:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.a2WTE1jKXA 00:13:17.742 16:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:17.742 16:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:13:17.742 16:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:17.742 ************************************ 00:13:17.742 END TEST raid_read_error_test 00:13:17.742 ************************************ 00:13:17.742 16:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:17.742 16:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:17.742 16:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:13:17.742 00:13:17.742 real 0m3.424s 
00:13:17.742 user 0m4.410s 00:13:17.742 sys 0m0.530s 00:13:17.742 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.742 16:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.742 16:28:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:17.742 16:28:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:17.742 16:28:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.742 16:28:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:17.742 ************************************ 00:13:17.742 START TEST raid_write_error_test 00:13:17.742 ************************************ 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:17.742 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ai8Bj9koyU 00:13:17.743 16:28:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84252 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84252 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 84252 ']' 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.743 16:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.743 [2024-12-06 16:28:59.545095] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:13:17.743 [2024-12-06 16:28:59.545354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84252 ] 00:13:18.002 [2024-12-06 16:28:59.722367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.002 [2024-12-06 16:28:59.750276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.002 [2024-12-06 16:28:59.793322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.002 [2024-12-06 16:28:59.793444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.939 BaseBdev1_malloc 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.939 true 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.939 [2024-12-06 16:29:00.449424] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:18.939 [2024-12-06 16:29:00.449482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.939 [2024-12-06 16:29:00.449519] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:18.939 [2024-12-06 16:29:00.449528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.939 [2024-12-06 16:29:00.452102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.939 [2024-12-06 16:29:00.452226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:18.939 BaseBdev1 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.939 BaseBdev2_malloc 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:18.939 16:29:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.939 true 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.939 [2024-12-06 16:29:00.490476] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:18.939 [2024-12-06 16:29:00.490531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.939 [2024-12-06 16:29:00.490565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:18.939 [2024-12-06 16:29:00.490574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.939 [2024-12-06 16:29:00.492852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.939 [2024-12-06 16:29:00.492934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:18.939 BaseBdev2 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:18.939 BaseBdev3_malloc 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.939 true 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.939 [2024-12-06 16:29:00.531558] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:18.939 [2024-12-06 16:29:00.531665] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.939 [2024-12-06 16:29:00.531691] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:18.939 [2024-12-06 16:29:00.531701] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.939 [2024-12-06 16:29:00.533945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.939 [2024-12-06 16:29:00.533981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:18.939 BaseBdev3 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.939 BaseBdev4_malloc 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.939 true 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.939 [2024-12-06 16:29:00.581260] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:18.939 [2024-12-06 16:29:00.581308] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.939 [2024-12-06 16:29:00.581330] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:18.939 [2024-12-06 16:29:00.581338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.939 [2024-12-06 16:29:00.583378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.939 [2024-12-06 16:29:00.583463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:18.939 BaseBdev4 
00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.939 [2024-12-06 16:29:00.593315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.939 [2024-12-06 16:29:00.595417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.939 [2024-12-06 16:29:00.595570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.939 [2024-12-06 16:29:00.595672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:18.939 [2024-12-06 16:29:00.595934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:13:18.939 [2024-12-06 16:29:00.595991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:18.939 [2024-12-06 16:29:00.596325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:18.939 [2024-12-06 16:29:00.596523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:13:18.939 [2024-12-06 16:29:00.596585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:13:18.939 [2024-12-06 16:29:00.596781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.939 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.940 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.940 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.940 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.940 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.940 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.940 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.940 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.940 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.940 "name": "raid_bdev1", 00:13:18.940 "uuid": "24d8a9e8-3202-45c0-a304-98856e03d955", 00:13:18.940 "strip_size_kb": 64, 00:13:18.940 "state": "online", 00:13:18.940 "raid_level": "concat", 00:13:18.940 "superblock": true, 00:13:18.940 "num_base_bdevs": 4, 00:13:18.940 "num_base_bdevs_discovered": 4, 00:13:18.940 
"num_base_bdevs_operational": 4, 00:13:18.940 "base_bdevs_list": [ 00:13:18.940 { 00:13:18.940 "name": "BaseBdev1", 00:13:18.940 "uuid": "446d7b53-7bc8-57a7-b97a-b49db7ed2a64", 00:13:18.940 "is_configured": true, 00:13:18.940 "data_offset": 2048, 00:13:18.940 "data_size": 63488 00:13:18.940 }, 00:13:18.940 { 00:13:18.940 "name": "BaseBdev2", 00:13:18.940 "uuid": "e0b4c5e8-2725-51ed-abed-aed4217f3ee3", 00:13:18.940 "is_configured": true, 00:13:18.940 "data_offset": 2048, 00:13:18.940 "data_size": 63488 00:13:18.940 }, 00:13:18.940 { 00:13:18.940 "name": "BaseBdev3", 00:13:18.940 "uuid": "7b69d04f-2726-5774-9b3a-00aaaf26878d", 00:13:18.940 "is_configured": true, 00:13:18.940 "data_offset": 2048, 00:13:18.940 "data_size": 63488 00:13:18.940 }, 00:13:18.940 { 00:13:18.940 "name": "BaseBdev4", 00:13:18.940 "uuid": "fb97affc-402d-5d79-ac12-4698d481b6c1", 00:13:18.940 "is_configured": true, 00:13:18.940 "data_offset": 2048, 00:13:18.940 "data_size": 63488 00:13:18.940 } 00:13:18.940 ] 00:13:18.940 }' 00:13:18.940 16:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.940 16:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.508 16:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:19.508 16:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:19.508 [2024-12-06 16:29:01.172769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.451 16:29:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.451 "name": "raid_bdev1", 00:13:20.451 "uuid": "24d8a9e8-3202-45c0-a304-98856e03d955", 00:13:20.451 "strip_size_kb": 64, 00:13:20.451 "state": "online", 00:13:20.451 "raid_level": "concat", 00:13:20.451 "superblock": true, 00:13:20.451 "num_base_bdevs": 4, 00:13:20.451 "num_base_bdevs_discovered": 4, 00:13:20.451 "num_base_bdevs_operational": 4, 00:13:20.451 "base_bdevs_list": [ 00:13:20.451 { 00:13:20.451 "name": "BaseBdev1", 00:13:20.451 "uuid": "446d7b53-7bc8-57a7-b97a-b49db7ed2a64", 00:13:20.451 "is_configured": true, 00:13:20.451 "data_offset": 2048, 00:13:20.451 "data_size": 63488 00:13:20.451 }, 00:13:20.451 { 00:13:20.451 "name": "BaseBdev2", 00:13:20.451 "uuid": "e0b4c5e8-2725-51ed-abed-aed4217f3ee3", 00:13:20.451 "is_configured": true, 00:13:20.451 "data_offset": 2048, 00:13:20.451 "data_size": 63488 00:13:20.451 }, 00:13:20.451 { 00:13:20.451 "name": "BaseBdev3", 00:13:20.451 "uuid": "7b69d04f-2726-5774-9b3a-00aaaf26878d", 00:13:20.451 "is_configured": true, 00:13:20.451 "data_offset": 2048, 00:13:20.451 "data_size": 63488 00:13:20.451 }, 00:13:20.451 { 00:13:20.451 "name": "BaseBdev4", 00:13:20.451 "uuid": "fb97affc-402d-5d79-ac12-4698d481b6c1", 00:13:20.451 "is_configured": true, 00:13:20.451 "data_offset": 2048, 00:13:20.451 "data_size": 63488 00:13:20.451 } 00:13:20.451 ] 00:13:20.451 }' 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.451 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.710 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:20.710 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.710 16:29:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.710 [2024-12-06 16:29:02.513530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:20.710 [2024-12-06 16:29:02.513623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.710 [2024-12-06 16:29:02.516548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.710 [2024-12-06 16:29:02.516664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.710 [2024-12-06 16:29:02.516743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.710 [2024-12-06 16:29:02.516794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:13:20.710 { 00:13:20.710 "results": [ 00:13:20.710 { 00:13:20.710 "job": "raid_bdev1", 00:13:20.710 "core_mask": "0x1", 00:13:20.710 "workload": "randrw", 00:13:20.710 "percentage": 50, 00:13:20.710 "status": "finished", 00:13:20.710 "queue_depth": 1, 00:13:20.710 "io_size": 131072, 00:13:20.710 "runtime": 1.341394, 00:13:20.710 "iops": 14466.294019505081, 00:13:20.710 "mibps": 1808.2867524381352, 00:13:20.710 "io_failed": 1, 00:13:20.710 "io_timeout": 0, 00:13:20.710 "avg_latency_us": 95.55125137995857, 00:13:20.710 "min_latency_us": 27.50043668122271, 00:13:20.710 "max_latency_us": 1745.7187772925763 00:13:20.710 } 00:13:20.710 ], 00:13:20.710 "core_count": 1 00:13:20.710 } 00:13:20.710 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.710 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84252 00:13:20.710 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 84252 ']' 00:13:20.710 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 84252 00:13:20.710 16:29:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:20.710 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.710 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84252 00:13:20.968 killing process with pid 84252 00:13:20.968 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.968 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.968 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84252' 00:13:20.968 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 84252 00:13:20.968 [2024-12-06 16:29:02.560690] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:20.969 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 84252 00:13:20.969 [2024-12-06 16:29:02.597655] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.227 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ai8Bj9koyU 00:13:21.227 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:21.227 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:21.227 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:13:21.227 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:21.227 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:21.227 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:21.227 ************************************ 00:13:21.227 16:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:13:21.227 00:13:21.227 
real 0m3.382s 00:13:21.227 user 0m4.318s 00:13:21.227 sys 0m0.550s 00:13:21.227 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.227 16:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.227 END TEST raid_write_error_test 00:13:21.227 ************************************ 00:13:21.227 16:29:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:21.227 16:29:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:21.227 16:29:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:21.227 16:29:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.227 16:29:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.227 ************************************ 00:13:21.227 START TEST raid_state_function_test 00:13:21.227 ************************************ 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:21.227 16:29:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:21.227 16:29:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:21.227 Process raid pid: 84379 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84379 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84379' 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84379 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 84379 ']' 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.227 16:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.227 [2024-12-06 16:29:02.986779] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:13:21.227 [2024-12-06 16:29:02.987004] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.486 [2024-12-06 16:29:03.145335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.486 [2024-12-06 16:29:03.175463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.486 [2024-12-06 16:29:03.219700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.486 [2024-12-06 16:29:03.219828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.053 [2024-12-06 16:29:03.854838] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:22.053 [2024-12-06 16:29:03.854965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:22.053 [2024-12-06 16:29:03.854998] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.053 [2024-12-06 16:29:03.855022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.053 [2024-12-06 16:29:03.855041] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:22.053 [2024-12-06 16:29:03.855064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.053 [2024-12-06 16:29:03.855085] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:22.053 [2024-12-06 16:29:03.855120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.053 16:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.311 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.311 "name": "Existed_Raid", 00:13:22.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.311 "strip_size_kb": 0, 00:13:22.311 "state": "configuring", 00:13:22.311 "raid_level": "raid1", 00:13:22.311 "superblock": false, 00:13:22.311 "num_base_bdevs": 4, 00:13:22.311 "num_base_bdevs_discovered": 0, 00:13:22.311 "num_base_bdevs_operational": 4, 00:13:22.311 "base_bdevs_list": [ 00:13:22.311 { 00:13:22.311 "name": "BaseBdev1", 00:13:22.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.311 "is_configured": false, 00:13:22.311 "data_offset": 0, 00:13:22.311 "data_size": 0 00:13:22.311 }, 00:13:22.311 { 00:13:22.311 "name": "BaseBdev2", 00:13:22.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.311 "is_configured": false, 00:13:22.311 "data_offset": 0, 00:13:22.311 "data_size": 0 00:13:22.311 }, 00:13:22.311 { 00:13:22.311 "name": "BaseBdev3", 00:13:22.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.311 "is_configured": false, 00:13:22.311 "data_offset": 0, 00:13:22.311 "data_size": 0 00:13:22.311 }, 00:13:22.311 { 00:13:22.311 "name": "BaseBdev4", 00:13:22.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.311 "is_configured": false, 00:13:22.311 "data_offset": 0, 00:13:22.311 "data_size": 0 00:13:22.311 } 00:13:22.311 ] 00:13:22.311 }' 00:13:22.311 16:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.311 16:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.569 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:13:22.569 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.569 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.569 [2024-12-06 16:29:04.318000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.569 [2024-12-06 16:29:04.318106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:22.569 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.570 [2024-12-06 16:29:04.329979] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:22.570 [2024-12-06 16:29:04.330068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:22.570 [2024-12-06 16:29:04.330102] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.570 [2024-12-06 16:29:04.330126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.570 [2024-12-06 16:29:04.330184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:22.570 [2024-12-06 16:29:04.330225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.570 [2024-12-06 16:29:04.330253] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:22.570 [2024-12-06 16:29:04.330285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.570 [2024-12-06 16:29:04.351055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.570 BaseBdev1 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.570 [ 00:13:22.570 { 00:13:22.570 "name": "BaseBdev1", 00:13:22.570 "aliases": [ 00:13:22.570 "241a7a95-e3c5-409d-a7eb-d668f756acf2" 00:13:22.570 ], 00:13:22.570 "product_name": "Malloc disk", 00:13:22.570 "block_size": 512, 00:13:22.570 "num_blocks": 65536, 00:13:22.570 "uuid": "241a7a95-e3c5-409d-a7eb-d668f756acf2", 00:13:22.570 "assigned_rate_limits": { 00:13:22.570 "rw_ios_per_sec": 0, 00:13:22.570 "rw_mbytes_per_sec": 0, 00:13:22.570 "r_mbytes_per_sec": 0, 00:13:22.570 "w_mbytes_per_sec": 0 00:13:22.570 }, 00:13:22.570 "claimed": true, 00:13:22.570 "claim_type": "exclusive_write", 00:13:22.570 "zoned": false, 00:13:22.570 "supported_io_types": { 00:13:22.570 "read": true, 00:13:22.570 "write": true, 00:13:22.570 "unmap": true, 00:13:22.570 "flush": true, 00:13:22.570 "reset": true, 00:13:22.570 "nvme_admin": false, 00:13:22.570 "nvme_io": false, 00:13:22.570 "nvme_io_md": false, 00:13:22.570 "write_zeroes": true, 00:13:22.570 "zcopy": true, 00:13:22.570 "get_zone_info": false, 00:13:22.570 "zone_management": false, 00:13:22.570 "zone_append": false, 00:13:22.570 "compare": false, 00:13:22.570 "compare_and_write": false, 00:13:22.570 "abort": true, 00:13:22.570 "seek_hole": false, 00:13:22.570 "seek_data": false, 00:13:22.570 "copy": true, 00:13:22.570 "nvme_iov_md": false 00:13:22.570 }, 00:13:22.570 "memory_domains": [ 00:13:22.570 { 00:13:22.570 "dma_device_id": "system", 00:13:22.570 "dma_device_type": 1 00:13:22.570 }, 00:13:22.570 { 00:13:22.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.570 "dma_device_type": 2 00:13:22.570 } 00:13:22.570 ], 00:13:22.570 "driver_specific": {} 00:13:22.570 } 00:13:22.570 ] 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.570 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.829 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.829 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.829 "name": "Existed_Raid", 00:13:22.829 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:22.829 "strip_size_kb": 0, 00:13:22.829 "state": "configuring", 00:13:22.829 "raid_level": "raid1", 00:13:22.829 "superblock": false, 00:13:22.829 "num_base_bdevs": 4, 00:13:22.829 "num_base_bdevs_discovered": 1, 00:13:22.829 "num_base_bdevs_operational": 4, 00:13:22.829 "base_bdevs_list": [ 00:13:22.829 { 00:13:22.829 "name": "BaseBdev1", 00:13:22.829 "uuid": "241a7a95-e3c5-409d-a7eb-d668f756acf2", 00:13:22.829 "is_configured": true, 00:13:22.829 "data_offset": 0, 00:13:22.829 "data_size": 65536 00:13:22.829 }, 00:13:22.829 { 00:13:22.829 "name": "BaseBdev2", 00:13:22.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.829 "is_configured": false, 00:13:22.829 "data_offset": 0, 00:13:22.829 "data_size": 0 00:13:22.829 }, 00:13:22.829 { 00:13:22.829 "name": "BaseBdev3", 00:13:22.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.829 "is_configured": false, 00:13:22.829 "data_offset": 0, 00:13:22.829 "data_size": 0 00:13:22.829 }, 00:13:22.829 { 00:13:22.829 "name": "BaseBdev4", 00:13:22.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.829 "is_configured": false, 00:13:22.829 "data_offset": 0, 00:13:22.829 "data_size": 0 00:13:22.829 } 00:13:22.829 ] 00:13:22.829 }' 00:13:22.829 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.829 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.087 [2024-12-06 16:29:04.766411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:23.087 [2024-12-06 16:29:04.766540] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.087 [2024-12-06 16:29:04.778410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.087 [2024-12-06 16:29:04.780459] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.087 [2024-12-06 16:29:04.780554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.087 [2024-12-06 16:29:04.780571] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:23.087 [2024-12-06 16:29:04.780582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:23.087 [2024-12-06 16:29:04.780590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:23.087 [2024-12-06 16:29:04.780600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:23.087 16:29:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.087 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.087 "name": "Existed_Raid", 00:13:23.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.087 "strip_size_kb": 0, 00:13:23.087 "state": "configuring", 00:13:23.087 "raid_level": "raid1", 00:13:23.087 "superblock": false, 00:13:23.087 "num_base_bdevs": 4, 00:13:23.087 "num_base_bdevs_discovered": 1, 00:13:23.087 
"num_base_bdevs_operational": 4, 00:13:23.087 "base_bdevs_list": [ 00:13:23.087 { 00:13:23.087 "name": "BaseBdev1", 00:13:23.087 "uuid": "241a7a95-e3c5-409d-a7eb-d668f756acf2", 00:13:23.087 "is_configured": true, 00:13:23.087 "data_offset": 0, 00:13:23.087 "data_size": 65536 00:13:23.087 }, 00:13:23.087 { 00:13:23.087 "name": "BaseBdev2", 00:13:23.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.087 "is_configured": false, 00:13:23.087 "data_offset": 0, 00:13:23.087 "data_size": 0 00:13:23.087 }, 00:13:23.088 { 00:13:23.088 "name": "BaseBdev3", 00:13:23.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.088 "is_configured": false, 00:13:23.088 "data_offset": 0, 00:13:23.088 "data_size": 0 00:13:23.088 }, 00:13:23.088 { 00:13:23.088 "name": "BaseBdev4", 00:13:23.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.088 "is_configured": false, 00:13:23.088 "data_offset": 0, 00:13:23.088 "data_size": 0 00:13:23.088 } 00:13:23.088 ] 00:13:23.088 }' 00:13:23.088 16:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.088 16:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.678 [2024-12-06 16:29:05.252823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.678 BaseBdev2 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.678 [ 00:13:23.678 { 00:13:23.678 "name": "BaseBdev2", 00:13:23.678 "aliases": [ 00:13:23.678 "5f33ca89-40d6-4a6c-9e89-1c1525ffb358" 00:13:23.678 ], 00:13:23.678 "product_name": "Malloc disk", 00:13:23.678 "block_size": 512, 00:13:23.678 "num_blocks": 65536, 00:13:23.678 "uuid": "5f33ca89-40d6-4a6c-9e89-1c1525ffb358", 00:13:23.678 "assigned_rate_limits": { 00:13:23.678 "rw_ios_per_sec": 0, 00:13:23.678 "rw_mbytes_per_sec": 0, 00:13:23.678 "r_mbytes_per_sec": 0, 00:13:23.678 "w_mbytes_per_sec": 0 00:13:23.678 }, 00:13:23.678 "claimed": true, 00:13:23.678 "claim_type": "exclusive_write", 00:13:23.678 "zoned": false, 00:13:23.678 "supported_io_types": { 00:13:23.678 "read": true, 00:13:23.678 "write": true, 00:13:23.678 
"unmap": true, 00:13:23.678 "flush": true, 00:13:23.678 "reset": true, 00:13:23.678 "nvme_admin": false, 00:13:23.678 "nvme_io": false, 00:13:23.678 "nvme_io_md": false, 00:13:23.678 "write_zeroes": true, 00:13:23.678 "zcopy": true, 00:13:23.678 "get_zone_info": false, 00:13:23.678 "zone_management": false, 00:13:23.678 "zone_append": false, 00:13:23.678 "compare": false, 00:13:23.678 "compare_and_write": false, 00:13:23.678 "abort": true, 00:13:23.678 "seek_hole": false, 00:13:23.678 "seek_data": false, 00:13:23.678 "copy": true, 00:13:23.678 "nvme_iov_md": false 00:13:23.678 }, 00:13:23.678 "memory_domains": [ 00:13:23.678 { 00:13:23.678 "dma_device_id": "system", 00:13:23.678 "dma_device_type": 1 00:13:23.678 }, 00:13:23.678 { 00:13:23.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.678 "dma_device_type": 2 00:13:23.678 } 00:13:23.678 ], 00:13:23.678 "driver_specific": {} 00:13:23.678 } 00:13:23.678 ] 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.678 16:29:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.678 "name": "Existed_Raid", 00:13:23.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.678 "strip_size_kb": 0, 00:13:23.678 "state": "configuring", 00:13:23.678 "raid_level": "raid1", 00:13:23.678 "superblock": false, 00:13:23.678 "num_base_bdevs": 4, 00:13:23.678 "num_base_bdevs_discovered": 2, 00:13:23.678 "num_base_bdevs_operational": 4, 00:13:23.678 "base_bdevs_list": [ 00:13:23.678 { 00:13:23.678 "name": "BaseBdev1", 00:13:23.678 "uuid": "241a7a95-e3c5-409d-a7eb-d668f756acf2", 00:13:23.678 "is_configured": true, 00:13:23.678 "data_offset": 0, 00:13:23.678 "data_size": 65536 00:13:23.678 }, 00:13:23.678 { 00:13:23.678 "name": "BaseBdev2", 00:13:23.678 "uuid": "5f33ca89-40d6-4a6c-9e89-1c1525ffb358", 00:13:23.678 "is_configured": true, 00:13:23.678 
"data_offset": 0, 00:13:23.678 "data_size": 65536 00:13:23.678 }, 00:13:23.678 { 00:13:23.678 "name": "BaseBdev3", 00:13:23.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.678 "is_configured": false, 00:13:23.678 "data_offset": 0, 00:13:23.678 "data_size": 0 00:13:23.678 }, 00:13:23.678 { 00:13:23.678 "name": "BaseBdev4", 00:13:23.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.678 "is_configured": false, 00:13:23.678 "data_offset": 0, 00:13:23.678 "data_size": 0 00:13:23.678 } 00:13:23.678 ] 00:13:23.678 }' 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.678 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.936 [2024-12-06 16:29:05.738907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.936 BaseBdev3 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.936 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.936 [ 00:13:23.936 { 00:13:23.936 "name": "BaseBdev3", 00:13:23.936 "aliases": [ 00:13:23.936 "dceadd42-09a5-418e-8291-b480a236dc76" 00:13:23.936 ], 00:13:23.936 "product_name": "Malloc disk", 00:13:23.936 "block_size": 512, 00:13:23.936 "num_blocks": 65536, 00:13:23.936 "uuid": "dceadd42-09a5-418e-8291-b480a236dc76", 00:13:23.936 "assigned_rate_limits": { 00:13:23.936 "rw_ios_per_sec": 0, 00:13:23.936 "rw_mbytes_per_sec": 0, 00:13:23.936 "r_mbytes_per_sec": 0, 00:13:23.936 "w_mbytes_per_sec": 0 00:13:23.936 }, 00:13:23.936 "claimed": true, 00:13:23.936 "claim_type": "exclusive_write", 00:13:23.936 "zoned": false, 00:13:23.936 "supported_io_types": { 00:13:23.936 "read": true, 00:13:23.936 "write": true, 00:13:23.936 "unmap": true, 00:13:23.936 "flush": true, 00:13:23.936 "reset": true, 00:13:23.936 "nvme_admin": false, 00:13:24.194 "nvme_io": false, 00:13:24.194 "nvme_io_md": false, 00:13:24.194 "write_zeroes": true, 00:13:24.194 "zcopy": true, 00:13:24.194 "get_zone_info": false, 00:13:24.194 "zone_management": false, 00:13:24.194 "zone_append": false, 00:13:24.194 "compare": false, 00:13:24.194 "compare_and_write": false, 00:13:24.194 "abort": true, 
00:13:24.194 "seek_hole": false, 00:13:24.194 "seek_data": false, 00:13:24.194 "copy": true, 00:13:24.194 "nvme_iov_md": false 00:13:24.194 }, 00:13:24.194 "memory_domains": [ 00:13:24.194 { 00:13:24.194 "dma_device_id": "system", 00:13:24.194 "dma_device_type": 1 00:13:24.194 }, 00:13:24.194 { 00:13:24.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.194 "dma_device_type": 2 00:13:24.194 } 00:13:24.194 ], 00:13:24.194 "driver_specific": {} 00:13:24.194 } 00:13:24.194 ] 00:13:24.194 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.194 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.195 16:29:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.195 "name": "Existed_Raid", 00:13:24.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.195 "strip_size_kb": 0, 00:13:24.195 "state": "configuring", 00:13:24.195 "raid_level": "raid1", 00:13:24.195 "superblock": false, 00:13:24.195 "num_base_bdevs": 4, 00:13:24.195 "num_base_bdevs_discovered": 3, 00:13:24.195 "num_base_bdevs_operational": 4, 00:13:24.195 "base_bdevs_list": [ 00:13:24.195 { 00:13:24.195 "name": "BaseBdev1", 00:13:24.195 "uuid": "241a7a95-e3c5-409d-a7eb-d668f756acf2", 00:13:24.195 "is_configured": true, 00:13:24.195 "data_offset": 0, 00:13:24.195 "data_size": 65536 00:13:24.195 }, 00:13:24.195 { 00:13:24.195 "name": "BaseBdev2", 00:13:24.195 "uuid": "5f33ca89-40d6-4a6c-9e89-1c1525ffb358", 00:13:24.195 "is_configured": true, 00:13:24.195 "data_offset": 0, 00:13:24.195 "data_size": 65536 00:13:24.195 }, 00:13:24.195 { 00:13:24.195 "name": "BaseBdev3", 00:13:24.195 "uuid": "dceadd42-09a5-418e-8291-b480a236dc76", 00:13:24.195 "is_configured": true, 00:13:24.195 "data_offset": 0, 00:13:24.195 "data_size": 65536 00:13:24.195 }, 00:13:24.195 { 00:13:24.195 "name": "BaseBdev4", 00:13:24.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.195 "is_configured": false, 00:13:24.195 "data_offset": 
0, 00:13:24.195 "data_size": 0 00:13:24.195 } 00:13:24.195 ] 00:13:24.195 }' 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.195 16:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.453 [2024-12-06 16:29:06.285138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:24.453 [2024-12-06 16:29:06.285196] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:24.453 [2024-12-06 16:29:06.285205] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:24.453 [2024-12-06 16:29:06.285501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:24.453 BaseBdev4 00:13:24.453 [2024-12-06 16:29:06.285670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:24.453 [2024-12-06 16:29:06.285689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:24.453 [2024-12-06 16:29:06.285899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.453 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.712 [ 00:13:24.712 { 00:13:24.712 "name": "BaseBdev4", 00:13:24.712 "aliases": [ 00:13:24.712 "65b0ad32-5b94-49c0-90a9-8c1b88e8e381" 00:13:24.712 ], 00:13:24.712 "product_name": "Malloc disk", 00:13:24.712 "block_size": 512, 00:13:24.712 "num_blocks": 65536, 00:13:24.712 "uuid": "65b0ad32-5b94-49c0-90a9-8c1b88e8e381", 00:13:24.712 "assigned_rate_limits": { 00:13:24.712 "rw_ios_per_sec": 0, 00:13:24.712 "rw_mbytes_per_sec": 0, 00:13:24.712 "r_mbytes_per_sec": 0, 00:13:24.712 "w_mbytes_per_sec": 0 00:13:24.712 }, 00:13:24.712 "claimed": true, 00:13:24.712 "claim_type": "exclusive_write", 00:13:24.712 "zoned": false, 00:13:24.712 "supported_io_types": { 00:13:24.712 "read": true, 00:13:24.712 "write": true, 00:13:24.712 "unmap": true, 00:13:24.712 "flush": true, 00:13:24.712 "reset": true, 00:13:24.712 "nvme_admin": false, 00:13:24.712 "nvme_io": 
false, 00:13:24.712 "nvme_io_md": false, 00:13:24.712 "write_zeroes": true, 00:13:24.712 "zcopy": true, 00:13:24.712 "get_zone_info": false, 00:13:24.712 "zone_management": false, 00:13:24.712 "zone_append": false, 00:13:24.712 "compare": false, 00:13:24.712 "compare_and_write": false, 00:13:24.712 "abort": true, 00:13:24.712 "seek_hole": false, 00:13:24.712 "seek_data": false, 00:13:24.712 "copy": true, 00:13:24.712 "nvme_iov_md": false 00:13:24.712 }, 00:13:24.712 "memory_domains": [ 00:13:24.712 { 00:13:24.712 "dma_device_id": "system", 00:13:24.712 "dma_device_type": 1 00:13:24.712 }, 00:13:24.712 { 00:13:24.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.712 "dma_device_type": 2 00:13:24.712 } 00:13:24.712 ], 00:13:24.712 "driver_specific": {} 00:13:24.712 } 00:13:24.712 ] 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.712 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.712 "name": "Existed_Raid", 00:13:24.712 "uuid": "5254eba9-6b97-4138-bbc8-ae06419628c8", 00:13:24.712 "strip_size_kb": 0, 00:13:24.712 "state": "online", 00:13:24.712 "raid_level": "raid1", 00:13:24.712 "superblock": false, 00:13:24.712 "num_base_bdevs": 4, 00:13:24.712 "num_base_bdevs_discovered": 4, 00:13:24.712 "num_base_bdevs_operational": 4, 00:13:24.712 "base_bdevs_list": [ 00:13:24.712 { 00:13:24.712 "name": "BaseBdev1", 00:13:24.713 "uuid": "241a7a95-e3c5-409d-a7eb-d668f756acf2", 00:13:24.713 "is_configured": true, 00:13:24.713 "data_offset": 0, 00:13:24.713 "data_size": 65536 00:13:24.713 }, 00:13:24.713 { 00:13:24.713 "name": "BaseBdev2", 00:13:24.713 "uuid": "5f33ca89-40d6-4a6c-9e89-1c1525ffb358", 00:13:24.713 "is_configured": true, 00:13:24.713 "data_offset": 0, 00:13:24.713 "data_size": 65536 00:13:24.713 }, 00:13:24.713 { 00:13:24.713 "name": "BaseBdev3", 00:13:24.713 "uuid": "dceadd42-09a5-418e-8291-b480a236dc76", 
00:13:24.713 "is_configured": true, 00:13:24.713 "data_offset": 0, 00:13:24.713 "data_size": 65536 00:13:24.713 }, 00:13:24.713 { 00:13:24.713 "name": "BaseBdev4", 00:13:24.713 "uuid": "65b0ad32-5b94-49c0-90a9-8c1b88e8e381", 00:13:24.713 "is_configured": true, 00:13:24.713 "data_offset": 0, 00:13:24.713 "data_size": 65536 00:13:24.713 } 00:13:24.713 ] 00:13:24.713 }' 00:13:24.713 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.713 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.971 [2024-12-06 16:29:06.756756] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.971 "name": "Existed_Raid", 00:13:24.971 "aliases": [ 00:13:24.971 "5254eba9-6b97-4138-bbc8-ae06419628c8" 00:13:24.971 ], 00:13:24.971 "product_name": "Raid Volume", 00:13:24.971 "block_size": 512, 00:13:24.971 "num_blocks": 65536, 00:13:24.971 "uuid": "5254eba9-6b97-4138-bbc8-ae06419628c8", 00:13:24.971 "assigned_rate_limits": { 00:13:24.971 "rw_ios_per_sec": 0, 00:13:24.971 "rw_mbytes_per_sec": 0, 00:13:24.971 "r_mbytes_per_sec": 0, 00:13:24.971 "w_mbytes_per_sec": 0 00:13:24.971 }, 00:13:24.971 "claimed": false, 00:13:24.971 "zoned": false, 00:13:24.971 "supported_io_types": { 00:13:24.971 "read": true, 00:13:24.971 "write": true, 00:13:24.971 "unmap": false, 00:13:24.971 "flush": false, 00:13:24.971 "reset": true, 00:13:24.971 "nvme_admin": false, 00:13:24.971 "nvme_io": false, 00:13:24.971 "nvme_io_md": false, 00:13:24.971 "write_zeroes": true, 00:13:24.971 "zcopy": false, 00:13:24.971 "get_zone_info": false, 00:13:24.971 "zone_management": false, 00:13:24.971 "zone_append": false, 00:13:24.971 "compare": false, 00:13:24.971 "compare_and_write": false, 00:13:24.971 "abort": false, 00:13:24.971 "seek_hole": false, 00:13:24.971 "seek_data": false, 00:13:24.971 "copy": false, 00:13:24.971 "nvme_iov_md": false 00:13:24.971 }, 00:13:24.971 "memory_domains": [ 00:13:24.971 { 00:13:24.971 "dma_device_id": "system", 00:13:24.971 "dma_device_type": 1 00:13:24.971 }, 00:13:24.971 { 00:13:24.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.971 "dma_device_type": 2 00:13:24.971 }, 00:13:24.971 { 00:13:24.971 "dma_device_id": "system", 00:13:24.971 "dma_device_type": 1 00:13:24.971 }, 00:13:24.971 { 00:13:24.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.971 "dma_device_type": 2 00:13:24.971 }, 00:13:24.971 { 00:13:24.971 "dma_device_id": "system", 00:13:24.971 "dma_device_type": 1 00:13:24.971 }, 00:13:24.971 { 00:13:24.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.971 "dma_device_type": 2 
00:13:24.971 }, 00:13:24.971 { 00:13:24.971 "dma_device_id": "system", 00:13:24.971 "dma_device_type": 1 00:13:24.971 }, 00:13:24.971 { 00:13:24.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.971 "dma_device_type": 2 00:13:24.971 } 00:13:24.971 ], 00:13:24.971 "driver_specific": { 00:13:24.971 "raid": { 00:13:24.971 "uuid": "5254eba9-6b97-4138-bbc8-ae06419628c8", 00:13:24.971 "strip_size_kb": 0, 00:13:24.971 "state": "online", 00:13:24.971 "raid_level": "raid1", 00:13:24.971 "superblock": false, 00:13:24.971 "num_base_bdevs": 4, 00:13:24.971 "num_base_bdevs_discovered": 4, 00:13:24.971 "num_base_bdevs_operational": 4, 00:13:24.971 "base_bdevs_list": [ 00:13:24.971 { 00:13:24.971 "name": "BaseBdev1", 00:13:24.971 "uuid": "241a7a95-e3c5-409d-a7eb-d668f756acf2", 00:13:24.971 "is_configured": true, 00:13:24.971 "data_offset": 0, 00:13:24.971 "data_size": 65536 00:13:24.971 }, 00:13:24.971 { 00:13:24.971 "name": "BaseBdev2", 00:13:24.971 "uuid": "5f33ca89-40d6-4a6c-9e89-1c1525ffb358", 00:13:24.971 "is_configured": true, 00:13:24.971 "data_offset": 0, 00:13:24.971 "data_size": 65536 00:13:24.971 }, 00:13:24.971 { 00:13:24.971 "name": "BaseBdev3", 00:13:24.971 "uuid": "dceadd42-09a5-418e-8291-b480a236dc76", 00:13:24.971 "is_configured": true, 00:13:24.971 "data_offset": 0, 00:13:24.971 "data_size": 65536 00:13:24.971 }, 00:13:24.971 { 00:13:24.971 "name": "BaseBdev4", 00:13:24.971 "uuid": "65b0ad32-5b94-49c0-90a9-8c1b88e8e381", 00:13:24.971 "is_configured": true, 00:13:24.971 "data_offset": 0, 00:13:24.971 "data_size": 65536 00:13:24.971 } 00:13:24.971 ] 00:13:24.971 } 00:13:24.971 } 00:13:24.971 }' 00:13:24.971 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:25.230 BaseBdev2 00:13:25.230 BaseBdev3 00:13:25.230 BaseBdev4' 00:13:25.230 
16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.230 16:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.230 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.230 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.230 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.230 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.230 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:25.230 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.230 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.230 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.230 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.488 [2024-12-06 16:29:07.091987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.488 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.489 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.489 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.489 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.489 "name": "Existed_Raid", 00:13:25.489 "uuid": "5254eba9-6b97-4138-bbc8-ae06419628c8", 00:13:25.489 "strip_size_kb": 0, 00:13:25.489 "state": "online", 00:13:25.489 "raid_level": "raid1", 00:13:25.489 "superblock": false, 00:13:25.489 "num_base_bdevs": 4, 00:13:25.489 "num_base_bdevs_discovered": 3, 00:13:25.489 "num_base_bdevs_operational": 3, 00:13:25.489 "base_bdevs_list": [ 00:13:25.489 { 00:13:25.489 "name": null, 00:13:25.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.489 "is_configured": false, 00:13:25.489 "data_offset": 0, 00:13:25.489 "data_size": 65536 00:13:25.489 }, 00:13:25.489 { 00:13:25.489 "name": "BaseBdev2", 00:13:25.489 "uuid": "5f33ca89-40d6-4a6c-9e89-1c1525ffb358", 00:13:25.489 "is_configured": true, 00:13:25.489 "data_offset": 0, 00:13:25.489 "data_size": 65536 00:13:25.489 }, 00:13:25.489 { 00:13:25.489 "name": "BaseBdev3", 00:13:25.489 "uuid": "dceadd42-09a5-418e-8291-b480a236dc76", 00:13:25.489 "is_configured": true, 00:13:25.489 "data_offset": 0, 00:13:25.489 "data_size": 65536 00:13:25.489 }, 00:13:25.489 { 
00:13:25.489 "name": "BaseBdev4", 00:13:25.489 "uuid": "65b0ad32-5b94-49c0-90a9-8c1b88e8e381", 00:13:25.489 "is_configured": true, 00:13:25.489 "data_offset": 0, 00:13:25.489 "data_size": 65536 00:13:25.489 } 00:13:25.489 ] 00:13:25.489 }' 00:13:25.489 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.489 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.747 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:26.005 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.005 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.005 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:26.005 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.005 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.005 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.005 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:26.005 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.006 [2024-12-06 16:29:07.635419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.006 
16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.006 [2024-12-06 16:29:07.702994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.006 16:29:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.006 [2024-12-06 16:29:07.774525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:26.006 [2024-12-06 16:29:07.774706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.006 [2024-12-06 16:29:07.787351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.006 [2024-12-06 16:29:07.787473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.006 [2024-12-06 16:29:07.787523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:26.006 16:29:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.006 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.265 BaseBdev2 00:13:26.265 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.265 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:26.265 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:26.265 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.265 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:26.265 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.265 16:29:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.265 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.265 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.266 [ 00:13:26.266 { 00:13:26.266 "name": "BaseBdev2", 00:13:26.266 "aliases": [ 00:13:26.266 "cebd9f96-eb4c-49e5-86b2-1dc1c2732c15" 00:13:26.266 ], 00:13:26.266 "product_name": "Malloc disk", 00:13:26.266 "block_size": 512, 00:13:26.266 "num_blocks": 65536, 00:13:26.266 "uuid": "cebd9f96-eb4c-49e5-86b2-1dc1c2732c15", 00:13:26.266 "assigned_rate_limits": { 00:13:26.266 "rw_ios_per_sec": 0, 00:13:26.266 "rw_mbytes_per_sec": 0, 00:13:26.266 "r_mbytes_per_sec": 0, 00:13:26.266 "w_mbytes_per_sec": 0 00:13:26.266 }, 00:13:26.266 "claimed": false, 00:13:26.266 "zoned": false, 00:13:26.266 "supported_io_types": { 00:13:26.266 "read": true, 00:13:26.266 "write": true, 00:13:26.266 "unmap": true, 00:13:26.266 "flush": true, 00:13:26.266 "reset": true, 00:13:26.266 "nvme_admin": false, 00:13:26.266 "nvme_io": false, 00:13:26.266 "nvme_io_md": false, 00:13:26.266 "write_zeroes": true, 00:13:26.266 "zcopy": true, 00:13:26.266 "get_zone_info": false, 00:13:26.266 "zone_management": false, 00:13:26.266 "zone_append": false, 00:13:26.266 "compare": false, 00:13:26.266 "compare_and_write": false, 
00:13:26.266 "abort": true, 00:13:26.266 "seek_hole": false, 00:13:26.266 "seek_data": false, 00:13:26.266 "copy": true, 00:13:26.266 "nvme_iov_md": false 00:13:26.266 }, 00:13:26.266 "memory_domains": [ 00:13:26.266 { 00:13:26.266 "dma_device_id": "system", 00:13:26.266 "dma_device_type": 1 00:13:26.266 }, 00:13:26.266 { 00:13:26.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.266 "dma_device_type": 2 00:13:26.266 } 00:13:26.266 ], 00:13:26.266 "driver_specific": {} 00:13:26.266 } 00:13:26.266 ] 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.266 BaseBdev3 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.266 16:29:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.266 [ 00:13:26.266 { 00:13:26.266 "name": "BaseBdev3", 00:13:26.266 "aliases": [ 00:13:26.266 "06a8c1fd-1873-4d80-bafc-686265621e75" 00:13:26.266 ], 00:13:26.266 "product_name": "Malloc disk", 00:13:26.266 "block_size": 512, 00:13:26.266 "num_blocks": 65536, 00:13:26.266 "uuid": "06a8c1fd-1873-4d80-bafc-686265621e75", 00:13:26.266 "assigned_rate_limits": { 00:13:26.266 "rw_ios_per_sec": 0, 00:13:26.266 "rw_mbytes_per_sec": 0, 00:13:26.266 "r_mbytes_per_sec": 0, 00:13:26.266 "w_mbytes_per_sec": 0 00:13:26.266 }, 00:13:26.266 "claimed": false, 00:13:26.266 "zoned": false, 00:13:26.266 "supported_io_types": { 00:13:26.266 "read": true, 00:13:26.266 "write": true, 00:13:26.266 "unmap": true, 00:13:26.266 "flush": true, 00:13:26.266 "reset": true, 00:13:26.266 "nvme_admin": false, 00:13:26.266 "nvme_io": false, 00:13:26.266 "nvme_io_md": false, 00:13:26.266 "write_zeroes": true, 00:13:26.266 "zcopy": true, 00:13:26.266 "get_zone_info": false, 00:13:26.266 "zone_management": false, 00:13:26.266 "zone_append": false, 00:13:26.266 "compare": false, 00:13:26.266 "compare_and_write": false, 
00:13:26.266 "abort": true, 00:13:26.266 "seek_hole": false, 00:13:26.266 "seek_data": false, 00:13:26.266 "copy": true, 00:13:26.266 "nvme_iov_md": false 00:13:26.266 }, 00:13:26.266 "memory_domains": [ 00:13:26.266 { 00:13:26.266 "dma_device_id": "system", 00:13:26.266 "dma_device_type": 1 00:13:26.266 }, 00:13:26.266 { 00:13:26.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.266 "dma_device_type": 2 00:13:26.266 } 00:13:26.266 ], 00:13:26.266 "driver_specific": {} 00:13:26.266 } 00:13:26.266 ] 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.266 BaseBdev4 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.266 16:29:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.266 [ 00:13:26.266 { 00:13:26.266 "name": "BaseBdev4", 00:13:26.266 "aliases": [ 00:13:26.266 "17017963-397d-4107-81ea-cb673286248f" 00:13:26.266 ], 00:13:26.266 "product_name": "Malloc disk", 00:13:26.266 "block_size": 512, 00:13:26.266 "num_blocks": 65536, 00:13:26.266 "uuid": "17017963-397d-4107-81ea-cb673286248f", 00:13:26.266 "assigned_rate_limits": { 00:13:26.266 "rw_ios_per_sec": 0, 00:13:26.266 "rw_mbytes_per_sec": 0, 00:13:26.266 "r_mbytes_per_sec": 0, 00:13:26.266 "w_mbytes_per_sec": 0 00:13:26.266 }, 00:13:26.266 "claimed": false, 00:13:26.266 "zoned": false, 00:13:26.266 "supported_io_types": { 00:13:26.266 "read": true, 00:13:26.266 "write": true, 00:13:26.266 "unmap": true, 00:13:26.266 "flush": true, 00:13:26.266 "reset": true, 00:13:26.266 "nvme_admin": false, 00:13:26.266 "nvme_io": false, 00:13:26.266 "nvme_io_md": false, 00:13:26.266 "write_zeroes": true, 00:13:26.266 "zcopy": true, 00:13:26.266 "get_zone_info": false, 00:13:26.266 "zone_management": false, 00:13:26.266 "zone_append": false, 00:13:26.266 "compare": false, 00:13:26.266 "compare_and_write": false, 
00:13:26.266 "abort": true, 00:13:26.266 "seek_hole": false, 00:13:26.266 "seek_data": false, 00:13:26.266 "copy": true, 00:13:26.266 "nvme_iov_md": false 00:13:26.266 }, 00:13:26.266 "memory_domains": [ 00:13:26.266 { 00:13:26.266 "dma_device_id": "system", 00:13:26.266 "dma_device_type": 1 00:13:26.266 }, 00:13:26.266 { 00:13:26.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.266 "dma_device_type": 2 00:13:26.266 } 00:13:26.266 ], 00:13:26.266 "driver_specific": {} 00:13:26.266 } 00:13:26.266 ] 00:13:26.266 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.267 [2024-12-06 16:29:07.977409] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.267 [2024-12-06 16:29:07.977465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.267 [2024-12-06 16:29:07.977487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.267 [2024-12-06 16:29:07.979475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.267 [2024-12-06 16:29:07.979533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:26.267 16:29:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.267 16:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.267 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.267 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.267 "name": "Existed_Raid", 00:13:26.267 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:26.267 "strip_size_kb": 0, 00:13:26.267 "state": "configuring", 00:13:26.267 "raid_level": "raid1", 00:13:26.267 "superblock": false, 00:13:26.267 "num_base_bdevs": 4, 00:13:26.267 "num_base_bdevs_discovered": 3, 00:13:26.267 "num_base_bdevs_operational": 4, 00:13:26.267 "base_bdevs_list": [ 00:13:26.267 { 00:13:26.267 "name": "BaseBdev1", 00:13:26.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.267 "is_configured": false, 00:13:26.267 "data_offset": 0, 00:13:26.267 "data_size": 0 00:13:26.267 }, 00:13:26.267 { 00:13:26.267 "name": "BaseBdev2", 00:13:26.267 "uuid": "cebd9f96-eb4c-49e5-86b2-1dc1c2732c15", 00:13:26.267 "is_configured": true, 00:13:26.267 "data_offset": 0, 00:13:26.267 "data_size": 65536 00:13:26.267 }, 00:13:26.267 { 00:13:26.267 "name": "BaseBdev3", 00:13:26.267 "uuid": "06a8c1fd-1873-4d80-bafc-686265621e75", 00:13:26.267 "is_configured": true, 00:13:26.267 "data_offset": 0, 00:13:26.267 "data_size": 65536 00:13:26.267 }, 00:13:26.267 { 00:13:26.267 "name": "BaseBdev4", 00:13:26.267 "uuid": "17017963-397d-4107-81ea-cb673286248f", 00:13:26.267 "is_configured": true, 00:13:26.267 "data_offset": 0, 00:13:26.267 "data_size": 65536 00:13:26.267 } 00:13:26.267 ] 00:13:26.267 }' 00:13:26.267 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.267 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.834 [2024-12-06 16:29:08.444641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.834 "name": "Existed_Raid", 00:13:26.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.834 
"strip_size_kb": 0, 00:13:26.834 "state": "configuring", 00:13:26.834 "raid_level": "raid1", 00:13:26.834 "superblock": false, 00:13:26.834 "num_base_bdevs": 4, 00:13:26.834 "num_base_bdevs_discovered": 2, 00:13:26.834 "num_base_bdevs_operational": 4, 00:13:26.834 "base_bdevs_list": [ 00:13:26.834 { 00:13:26.834 "name": "BaseBdev1", 00:13:26.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.834 "is_configured": false, 00:13:26.834 "data_offset": 0, 00:13:26.834 "data_size": 0 00:13:26.834 }, 00:13:26.834 { 00:13:26.834 "name": null, 00:13:26.834 "uuid": "cebd9f96-eb4c-49e5-86b2-1dc1c2732c15", 00:13:26.834 "is_configured": false, 00:13:26.834 "data_offset": 0, 00:13:26.834 "data_size": 65536 00:13:26.834 }, 00:13:26.834 { 00:13:26.834 "name": "BaseBdev3", 00:13:26.834 "uuid": "06a8c1fd-1873-4d80-bafc-686265621e75", 00:13:26.834 "is_configured": true, 00:13:26.834 "data_offset": 0, 00:13:26.834 "data_size": 65536 00:13:26.834 }, 00:13:26.834 { 00:13:26.834 "name": "BaseBdev4", 00:13:26.834 "uuid": "17017963-397d-4107-81ea-cb673286248f", 00:13:26.834 "is_configured": true, 00:13:26.834 "data_offset": 0, 00:13:26.834 "data_size": 65536 00:13:26.834 } 00:13:26.834 ] 00:13:26.834 }' 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.834 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.401 16:29:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.401 [2024-12-06 16:29:08.978813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.401 BaseBdev1 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.401 16:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.401 [ 00:13:27.401 { 00:13:27.401 "name": "BaseBdev1", 00:13:27.401 "aliases": [ 00:13:27.401 "416d3a75-0d31-4aae-8f95-e7fbb2640900" 00:13:27.401 ], 00:13:27.401 "product_name": "Malloc disk", 00:13:27.401 "block_size": 512, 00:13:27.401 "num_blocks": 65536, 00:13:27.401 "uuid": "416d3a75-0d31-4aae-8f95-e7fbb2640900", 00:13:27.401 "assigned_rate_limits": { 00:13:27.401 "rw_ios_per_sec": 0, 00:13:27.401 "rw_mbytes_per_sec": 0, 00:13:27.401 "r_mbytes_per_sec": 0, 00:13:27.401 "w_mbytes_per_sec": 0 00:13:27.401 }, 00:13:27.401 "claimed": true, 00:13:27.401 "claim_type": "exclusive_write", 00:13:27.401 "zoned": false, 00:13:27.401 "supported_io_types": { 00:13:27.401 "read": true, 00:13:27.401 "write": true, 00:13:27.401 "unmap": true, 00:13:27.401 "flush": true, 00:13:27.401 "reset": true, 00:13:27.401 "nvme_admin": false, 00:13:27.401 "nvme_io": false, 00:13:27.401 "nvme_io_md": false, 00:13:27.401 "write_zeroes": true, 00:13:27.401 "zcopy": true, 00:13:27.401 "get_zone_info": false, 00:13:27.401 "zone_management": false, 00:13:27.401 "zone_append": false, 00:13:27.401 "compare": false, 00:13:27.401 "compare_and_write": false, 00:13:27.401 "abort": true, 00:13:27.401 "seek_hole": false, 00:13:27.401 "seek_data": false, 00:13:27.401 "copy": true, 00:13:27.401 "nvme_iov_md": false 00:13:27.401 }, 00:13:27.401 "memory_domains": [ 00:13:27.401 { 00:13:27.401 "dma_device_id": "system", 00:13:27.401 "dma_device_type": 1 00:13:27.401 }, 00:13:27.401 { 00:13:27.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.401 "dma_device_type": 2 00:13:27.401 } 00:13:27.401 ], 00:13:27.401 "driver_specific": {} 00:13:27.401 } 00:13:27.401 ] 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.401 "name": "Existed_Raid", 00:13:27.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.401 
"strip_size_kb": 0, 00:13:27.401 "state": "configuring", 00:13:27.401 "raid_level": "raid1", 00:13:27.401 "superblock": false, 00:13:27.401 "num_base_bdevs": 4, 00:13:27.401 "num_base_bdevs_discovered": 3, 00:13:27.401 "num_base_bdevs_operational": 4, 00:13:27.401 "base_bdevs_list": [ 00:13:27.401 { 00:13:27.401 "name": "BaseBdev1", 00:13:27.401 "uuid": "416d3a75-0d31-4aae-8f95-e7fbb2640900", 00:13:27.401 "is_configured": true, 00:13:27.401 "data_offset": 0, 00:13:27.401 "data_size": 65536 00:13:27.401 }, 00:13:27.401 { 00:13:27.401 "name": null, 00:13:27.401 "uuid": "cebd9f96-eb4c-49e5-86b2-1dc1c2732c15", 00:13:27.401 "is_configured": false, 00:13:27.401 "data_offset": 0, 00:13:27.401 "data_size": 65536 00:13:27.401 }, 00:13:27.401 { 00:13:27.401 "name": "BaseBdev3", 00:13:27.401 "uuid": "06a8c1fd-1873-4d80-bafc-686265621e75", 00:13:27.401 "is_configured": true, 00:13:27.401 "data_offset": 0, 00:13:27.401 "data_size": 65536 00:13:27.401 }, 00:13:27.401 { 00:13:27.401 "name": "BaseBdev4", 00:13:27.401 "uuid": "17017963-397d-4107-81ea-cb673286248f", 00:13:27.401 "is_configured": true, 00:13:27.401 "data_offset": 0, 00:13:27.401 "data_size": 65536 00:13:27.401 } 00:13:27.401 ] 00:13:27.401 }' 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.401 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.660 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.660 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:27.660 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.660 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.660 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.660 
16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:27.660 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:27.660 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.660 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.660 [2024-12-06 16:29:09.494062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.919 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.919 "name": "Existed_Raid", 00:13:27.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.919 "strip_size_kb": 0, 00:13:27.919 "state": "configuring", 00:13:27.919 "raid_level": "raid1", 00:13:27.919 "superblock": false, 00:13:27.919 "num_base_bdevs": 4, 00:13:27.919 "num_base_bdevs_discovered": 2, 00:13:27.919 "num_base_bdevs_operational": 4, 00:13:27.919 "base_bdevs_list": [ 00:13:27.919 { 00:13:27.919 "name": "BaseBdev1", 00:13:27.919 "uuid": "416d3a75-0d31-4aae-8f95-e7fbb2640900", 00:13:27.919 "is_configured": true, 00:13:27.919 "data_offset": 0, 00:13:27.919 "data_size": 65536 00:13:27.919 }, 00:13:27.919 { 00:13:27.919 "name": null, 00:13:27.919 "uuid": "cebd9f96-eb4c-49e5-86b2-1dc1c2732c15", 00:13:27.919 "is_configured": false, 00:13:27.919 "data_offset": 0, 00:13:27.919 "data_size": 65536 00:13:27.919 }, 00:13:27.919 { 00:13:27.919 "name": null, 00:13:27.919 "uuid": "06a8c1fd-1873-4d80-bafc-686265621e75", 00:13:27.919 "is_configured": false, 00:13:27.919 "data_offset": 0, 00:13:27.919 "data_size": 65536 00:13:27.919 }, 00:13:27.919 { 00:13:27.919 "name": "BaseBdev4", 00:13:27.919 "uuid": "17017963-397d-4107-81ea-cb673286248f", 00:13:27.919 "is_configured": true, 00:13:27.920 "data_offset": 0, 00:13:27.920 "data_size": 65536 00:13:27.920 } 00:13:27.920 ] 00:13:27.920 }' 00:13:27.920 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.920 16:29:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.179 [2024-12-06 16:29:09.985279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.179 16:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.179 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.438 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.438 "name": "Existed_Raid", 00:13:28.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.438 "strip_size_kb": 0, 00:13:28.438 "state": "configuring", 00:13:28.438 "raid_level": "raid1", 00:13:28.438 "superblock": false, 00:13:28.438 "num_base_bdevs": 4, 00:13:28.438 "num_base_bdevs_discovered": 3, 00:13:28.438 "num_base_bdevs_operational": 4, 00:13:28.438 "base_bdevs_list": [ 00:13:28.438 { 00:13:28.438 "name": "BaseBdev1", 00:13:28.438 "uuid": "416d3a75-0d31-4aae-8f95-e7fbb2640900", 00:13:28.438 "is_configured": true, 00:13:28.438 "data_offset": 0, 00:13:28.438 "data_size": 65536 00:13:28.438 }, 00:13:28.438 { 00:13:28.438 "name": null, 00:13:28.438 "uuid": "cebd9f96-eb4c-49e5-86b2-1dc1c2732c15", 00:13:28.438 "is_configured": false, 00:13:28.438 "data_offset": 0, 00:13:28.438 "data_size": 65536 00:13:28.438 }, 00:13:28.438 { 
00:13:28.438 "name": "BaseBdev3", 00:13:28.438 "uuid": "06a8c1fd-1873-4d80-bafc-686265621e75", 00:13:28.438 "is_configured": true, 00:13:28.438 "data_offset": 0, 00:13:28.438 "data_size": 65536 00:13:28.438 }, 00:13:28.438 { 00:13:28.438 "name": "BaseBdev4", 00:13:28.438 "uuid": "17017963-397d-4107-81ea-cb673286248f", 00:13:28.438 "is_configured": true, 00:13:28.438 "data_offset": 0, 00:13:28.438 "data_size": 65536 00:13:28.438 } 00:13:28.438 ] 00:13:28.438 }' 00:13:28.438 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.438 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.694 [2024-12-06 16:29:10.488466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.694 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.950 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.950 "name": "Existed_Raid", 00:13:28.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.950 "strip_size_kb": 0, 00:13:28.950 "state": "configuring", 00:13:28.950 "raid_level": "raid1", 00:13:28.950 "superblock": false, 00:13:28.950 
"num_base_bdevs": 4, 00:13:28.950 "num_base_bdevs_discovered": 2, 00:13:28.951 "num_base_bdevs_operational": 4, 00:13:28.951 "base_bdevs_list": [ 00:13:28.951 { 00:13:28.951 "name": null, 00:13:28.951 "uuid": "416d3a75-0d31-4aae-8f95-e7fbb2640900", 00:13:28.951 "is_configured": false, 00:13:28.951 "data_offset": 0, 00:13:28.951 "data_size": 65536 00:13:28.951 }, 00:13:28.951 { 00:13:28.951 "name": null, 00:13:28.951 "uuid": "cebd9f96-eb4c-49e5-86b2-1dc1c2732c15", 00:13:28.951 "is_configured": false, 00:13:28.951 "data_offset": 0, 00:13:28.951 "data_size": 65536 00:13:28.951 }, 00:13:28.951 { 00:13:28.951 "name": "BaseBdev3", 00:13:28.951 "uuid": "06a8c1fd-1873-4d80-bafc-686265621e75", 00:13:28.951 "is_configured": true, 00:13:28.951 "data_offset": 0, 00:13:28.951 "data_size": 65536 00:13:28.951 }, 00:13:28.951 { 00:13:28.951 "name": "BaseBdev4", 00:13:28.951 "uuid": "17017963-397d-4107-81ea-cb673286248f", 00:13:28.951 "is_configured": true, 00:13:28.951 "data_offset": 0, 00:13:28.951 "data_size": 65536 00:13:28.951 } 00:13:28.951 ] 00:13:28.951 }' 00:13:28.951 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.951 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.214 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.214 16:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:29.214 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.214 16:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.214 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.214 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:29.214 16:29:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:29.214 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.214 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.214 [2024-12-06 16:29:11.042782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.481 16:29:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.481 "name": "Existed_Raid", 00:13:29.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.481 "strip_size_kb": 0, 00:13:29.481 "state": "configuring", 00:13:29.481 "raid_level": "raid1", 00:13:29.481 "superblock": false, 00:13:29.481 "num_base_bdevs": 4, 00:13:29.481 "num_base_bdevs_discovered": 3, 00:13:29.481 "num_base_bdevs_operational": 4, 00:13:29.481 "base_bdevs_list": [ 00:13:29.481 { 00:13:29.481 "name": null, 00:13:29.481 "uuid": "416d3a75-0d31-4aae-8f95-e7fbb2640900", 00:13:29.481 "is_configured": false, 00:13:29.481 "data_offset": 0, 00:13:29.481 "data_size": 65536 00:13:29.481 }, 00:13:29.481 { 00:13:29.481 "name": "BaseBdev2", 00:13:29.481 "uuid": "cebd9f96-eb4c-49e5-86b2-1dc1c2732c15", 00:13:29.481 "is_configured": true, 00:13:29.481 "data_offset": 0, 00:13:29.481 "data_size": 65536 00:13:29.481 }, 00:13:29.481 { 00:13:29.481 "name": "BaseBdev3", 00:13:29.481 "uuid": "06a8c1fd-1873-4d80-bafc-686265621e75", 00:13:29.481 "is_configured": true, 00:13:29.481 "data_offset": 0, 00:13:29.481 "data_size": 65536 00:13:29.481 }, 00:13:29.481 { 00:13:29.481 "name": "BaseBdev4", 00:13:29.481 "uuid": "17017963-397d-4107-81ea-cb673286248f", 00:13:29.481 "is_configured": true, 00:13:29.481 "data_offset": 0, 00:13:29.481 "data_size": 65536 00:13:29.481 } 00:13:29.481 ] 00:13:29.481 }' 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.481 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.738 16:29:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.738 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:29.738 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.738 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.738 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.738 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:29.738 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.738 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.738 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.738 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:29.738 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 416d3a75-0d31-4aae-8f95-e7fbb2640900 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.995 [2024-12-06 16:29:11.601477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:29.995 [2024-12-06 16:29:11.601620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:29.995 [2024-12-06 16:29:11.601638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:29.995 [2024-12-06 16:29:11.601982] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:29.995 [2024-12-06 16:29:11.602131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:29.995 [2024-12-06 16:29:11.602143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:29.995 [2024-12-06 16:29:11.602385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.995 NewBaseBdev 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:29.995 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.995 16:29:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.995 [ 00:13:29.995 { 00:13:29.995 "name": "NewBaseBdev", 00:13:29.995 "aliases": [ 00:13:29.995 "416d3a75-0d31-4aae-8f95-e7fbb2640900" 00:13:29.995 ], 00:13:29.995 "product_name": "Malloc disk", 00:13:29.995 "block_size": 512, 00:13:29.995 "num_blocks": 65536, 00:13:29.995 "uuid": "416d3a75-0d31-4aae-8f95-e7fbb2640900", 00:13:29.995 "assigned_rate_limits": { 00:13:29.995 "rw_ios_per_sec": 0, 00:13:29.995 "rw_mbytes_per_sec": 0, 00:13:29.995 "r_mbytes_per_sec": 0, 00:13:29.995 "w_mbytes_per_sec": 0 00:13:29.995 }, 00:13:29.995 "claimed": true, 00:13:29.996 "claim_type": "exclusive_write", 00:13:29.996 "zoned": false, 00:13:29.996 "supported_io_types": { 00:13:29.996 "read": true, 00:13:29.996 "write": true, 00:13:29.996 "unmap": true, 00:13:29.996 "flush": true, 00:13:29.996 "reset": true, 00:13:29.996 "nvme_admin": false, 00:13:29.996 "nvme_io": false, 00:13:29.996 "nvme_io_md": false, 00:13:29.996 "write_zeroes": true, 00:13:29.996 "zcopy": true, 00:13:29.996 "get_zone_info": false, 00:13:29.996 "zone_management": false, 00:13:29.996 "zone_append": false, 00:13:29.996 "compare": false, 00:13:29.996 "compare_and_write": false, 00:13:29.996 "abort": true, 00:13:29.996 "seek_hole": false, 00:13:29.996 "seek_data": false, 00:13:29.996 "copy": true, 00:13:29.996 "nvme_iov_md": false 00:13:29.996 }, 00:13:29.996 "memory_domains": [ 00:13:29.996 { 00:13:29.996 "dma_device_id": "system", 00:13:29.996 "dma_device_type": 1 00:13:29.996 }, 00:13:29.996 { 00:13:29.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.996 "dma_device_type": 2 00:13:29.996 } 00:13:29.996 ], 00:13:29.996 "driver_specific": {} 00:13:29.996 } 00:13:29.996 ] 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:29.996 16:29:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.996 "name": "Existed_Raid", 00:13:29.996 "uuid": "f5b9cbb1-944a-408b-bd29-21f6b9776c46", 00:13:29.996 "strip_size_kb": 0, 00:13:29.996 "state": "online", 00:13:29.996 "raid_level": "raid1", 
00:13:29.996 "superblock": false, 00:13:29.996 "num_base_bdevs": 4, 00:13:29.996 "num_base_bdevs_discovered": 4, 00:13:29.996 "num_base_bdevs_operational": 4, 00:13:29.996 "base_bdevs_list": [ 00:13:29.996 { 00:13:29.996 "name": "NewBaseBdev", 00:13:29.996 "uuid": "416d3a75-0d31-4aae-8f95-e7fbb2640900", 00:13:29.996 "is_configured": true, 00:13:29.996 "data_offset": 0, 00:13:29.996 "data_size": 65536 00:13:29.996 }, 00:13:29.996 { 00:13:29.996 "name": "BaseBdev2", 00:13:29.996 "uuid": "cebd9f96-eb4c-49e5-86b2-1dc1c2732c15", 00:13:29.996 "is_configured": true, 00:13:29.996 "data_offset": 0, 00:13:29.996 "data_size": 65536 00:13:29.996 }, 00:13:29.996 { 00:13:29.996 "name": "BaseBdev3", 00:13:29.996 "uuid": "06a8c1fd-1873-4d80-bafc-686265621e75", 00:13:29.996 "is_configured": true, 00:13:29.996 "data_offset": 0, 00:13:29.996 "data_size": 65536 00:13:29.996 }, 00:13:29.996 { 00:13:29.996 "name": "BaseBdev4", 00:13:29.996 "uuid": "17017963-397d-4107-81ea-cb673286248f", 00:13:29.996 "is_configured": true, 00:13:29.996 "data_offset": 0, 00:13:29.996 "data_size": 65536 00:13:29.996 } 00:13:29.996 ] 00:13:29.996 }' 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.996 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.255 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:30.255 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:30.255 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:30.255 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:30.255 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:30.255 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:13:30.255 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:30.255 16:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:30.255 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.255 16:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.255 [2024-12-06 16:29:12.001272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.255 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.255 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:30.255 "name": "Existed_Raid", 00:13:30.255 "aliases": [ 00:13:30.255 "f5b9cbb1-944a-408b-bd29-21f6b9776c46" 00:13:30.255 ], 00:13:30.255 "product_name": "Raid Volume", 00:13:30.255 "block_size": 512, 00:13:30.255 "num_blocks": 65536, 00:13:30.255 "uuid": "f5b9cbb1-944a-408b-bd29-21f6b9776c46", 00:13:30.255 "assigned_rate_limits": { 00:13:30.255 "rw_ios_per_sec": 0, 00:13:30.255 "rw_mbytes_per_sec": 0, 00:13:30.255 "r_mbytes_per_sec": 0, 00:13:30.255 "w_mbytes_per_sec": 0 00:13:30.255 }, 00:13:30.255 "claimed": false, 00:13:30.255 "zoned": false, 00:13:30.255 "supported_io_types": { 00:13:30.255 "read": true, 00:13:30.255 "write": true, 00:13:30.255 "unmap": false, 00:13:30.255 "flush": false, 00:13:30.255 "reset": true, 00:13:30.255 "nvme_admin": false, 00:13:30.255 "nvme_io": false, 00:13:30.255 "nvme_io_md": false, 00:13:30.255 "write_zeroes": true, 00:13:30.255 "zcopy": false, 00:13:30.255 "get_zone_info": false, 00:13:30.255 "zone_management": false, 00:13:30.255 "zone_append": false, 00:13:30.255 "compare": false, 00:13:30.255 "compare_and_write": false, 00:13:30.255 "abort": false, 00:13:30.255 "seek_hole": false, 00:13:30.255 "seek_data": false, 00:13:30.255 "copy": false, 00:13:30.255 
"nvme_iov_md": false 00:13:30.255 }, 00:13:30.255 "memory_domains": [ 00:13:30.255 { 00:13:30.255 "dma_device_id": "system", 00:13:30.255 "dma_device_type": 1 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.255 "dma_device_type": 2 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "system", 00:13:30.255 "dma_device_type": 1 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.255 "dma_device_type": 2 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "system", 00:13:30.255 "dma_device_type": 1 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.255 "dma_device_type": 2 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "system", 00:13:30.255 "dma_device_type": 1 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.255 "dma_device_type": 2 00:13:30.255 } 00:13:30.255 ], 00:13:30.255 "driver_specific": { 00:13:30.255 "raid": { 00:13:30.255 "uuid": "f5b9cbb1-944a-408b-bd29-21f6b9776c46", 00:13:30.255 "strip_size_kb": 0, 00:13:30.255 "state": "online", 00:13:30.255 "raid_level": "raid1", 00:13:30.255 "superblock": false, 00:13:30.255 "num_base_bdevs": 4, 00:13:30.255 "num_base_bdevs_discovered": 4, 00:13:30.255 "num_base_bdevs_operational": 4, 00:13:30.255 "base_bdevs_list": [ 00:13:30.255 { 00:13:30.255 "name": "NewBaseBdev", 00:13:30.255 "uuid": "416d3a75-0d31-4aae-8f95-e7fbb2640900", 00:13:30.255 "is_configured": true, 00:13:30.255 "data_offset": 0, 00:13:30.255 "data_size": 65536 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "name": "BaseBdev2", 00:13:30.255 "uuid": "cebd9f96-eb4c-49e5-86b2-1dc1c2732c15", 00:13:30.255 "is_configured": true, 00:13:30.255 "data_offset": 0, 00:13:30.255 "data_size": 65536 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "name": "BaseBdev3", 00:13:30.255 "uuid": "06a8c1fd-1873-4d80-bafc-686265621e75", 00:13:30.255 "is_configured": true, 
00:13:30.255 "data_offset": 0, 00:13:30.255 "data_size": 65536 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "name": "BaseBdev4", 00:13:30.255 "uuid": "17017963-397d-4107-81ea-cb673286248f", 00:13:30.255 "is_configured": true, 00:13:30.255 "data_offset": 0, 00:13:30.255 "data_size": 65536 00:13:30.255 } 00:13:30.255 ] 00:13:30.255 } 00:13:30.255 } 00:13:30.255 }' 00:13:30.255 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.575 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:30.575 BaseBdev2 00:13:30.575 BaseBdev3 00:13:30.575 BaseBdev4' 00:13:30.575 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.575 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:30.575 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.575 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:30.575 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.575 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.575 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.575 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.575 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.576 [2024-12-06 16:29:12.316361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:30.576 [2024-12-06 16:29:12.316393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.576 [2024-12-06 16:29:12.316494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.576 [2024-12-06 16:29:12.316811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.576 [2024-12-06 16:29:12.316830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 84379 
00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 84379 ']' 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 84379 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84379 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84379' 00:13:30.576 killing process with pid 84379 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 84379 00:13:30.576 [2024-12-06 16:29:12.360325] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.576 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 84379 00:13:30.576 [2024-12-06 16:29:12.404361] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:30.833 16:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:30.833 ************************************ 00:13:30.833 END TEST raid_state_function_test 00:13:30.833 ************************************ 00:13:30.833 00:13:30.833 real 0m9.738s 00:13:30.833 user 0m16.682s 00:13:30.833 sys 0m2.023s 00:13:30.833 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.833 16:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.092 16:29:12 bdev_raid -- 
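[Editor's annotation, not part of the captured log.] The test that just finished repeatedly calls `rpc_cmd bdev_raid_get_bdevs all` and pipes the JSON through `jq` to check fields such as `state`, `raid_level`, and the base-bdev counts (see the `verify_raid_bdev_state` calls above). The sketch below is a minimal, self-contained illustration of that verification pattern; the sample JSON string and the `check_field` helper are hypothetical stand-ins for the real RPC output and the suite's own helpers, used here only so the idea runs without an SPDK target.

```shell
#!/bin/bash
# Illustrative sketch: verify fields of a raid bdev description, in the
# spirit of verify_raid_bdev_state. The JSON below stands in for the
# output of `rpc_cmd bdev_raid_get_bdevs all` (hypothetical sample data).
raid_bdev_info='{"name":"Existed_Raid","state":"online","raid_level":"raid1","num_base_bdevs_discovered":4,"num_base_bdevs_operational":4}'

check_field() {
    # Assert that a literal "key":value pair appears in the captured info.
    local pair=$1
    if printf '%s' "$raid_bdev_info" | grep -q "$pair"; then
        echo "OK: $pair"
    else
        echo "FAIL: $pair" >&2
        return 1
    fi
}

check_field '"state":"online"'
check_field '"raid_level":"raid1"'
check_field '"num_base_bdevs_operational":4'
```

The real suite does the same comparison with `jq -r '.[] | select(.name == "Existed_Raid")'` against a live SPDK target, which additionally tolerates field reordering; plain `grep` is used here only to keep the sketch dependency-free.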
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:31.092 16:29:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:31.092 16:29:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.092 16:29:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:31.092 ************************************ 00:13:31.092 START TEST raid_state_function_test_sb 00:13:31.092 ************************************ 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.092 16:29:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:31.092 Process raid pid: 85034 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=85034 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85034' 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 85034 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85034 ']' 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.092 16:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.092 [2024-12-06 16:29:12.805268] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:13:31.092 [2024-12-06 16:29:12.805409] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.352 [2024-12-06 16:29:12.979941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.352 [2024-12-06 16:29:13.011473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.352 [2024-12-06 16:29:13.057195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.352 [2024-12-06 16:29:13.057245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.918 16:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.918 16:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:31.918 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:31.918 16:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.918 16:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.918 [2024-12-06 16:29:13.709031] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:31.918 [2024-12-06 16:29:13.709097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:31.919 [2024-12-06 16:29:13.709108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:31.919 [2024-12-06 16:29:13.709119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:31.919 [2024-12-06 16:29:13.709126] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:31.919 [2024-12-06 16:29:13.709141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:31.919 [2024-12-06 16:29:13.709150] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:31.919 [2024-12-06 16:29:13.709160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.919 16:29:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:31.919 16:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:32.177 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:32.177 "name": "Existed_Raid",
00:13:32.177 "uuid": "e1fd9e67-78a6-431e-8d7c-6c20f041a5a9",
00:13:32.177 "strip_size_kb": 0,
00:13:32.177 "state": "configuring",
00:13:32.177 "raid_level": "raid1",
00:13:32.177 "superblock": true,
00:13:32.177 "num_base_bdevs": 4,
00:13:32.177 "num_base_bdevs_discovered": 0,
00:13:32.177 "num_base_bdevs_operational": 4,
00:13:32.177 "base_bdevs_list": [
00:13:32.177 {
00:13:32.177 "name": "BaseBdev1",
00:13:32.177 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:32.177 "is_configured": false,
00:13:32.177 "data_offset": 0,
00:13:32.177 "data_size": 0
00:13:32.177 },
00:13:32.177 {
00:13:32.177 "name": "BaseBdev2",
00:13:32.177 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:32.177 "is_configured": false,
00:13:32.177 "data_offset": 0,
00:13:32.177 "data_size": 0
00:13:32.177 },
00:13:32.177 {
00:13:32.177 "name": "BaseBdev3",
00:13:32.177 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:32.177 "is_configured": false,
00:13:32.177 "data_offset": 0,
00:13:32.177 "data_size": 0
00:13:32.177 },
00:13:32.177 {
00:13:32.177 "name": "BaseBdev4",
00:13:32.177 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:32.177 "is_configured": false,
00:13:32.177 "data_offset": 0,
00:13:32.177 "data_size": 0
00:13:32.177 }
00:13:32.177 ]
00:13:32.177 }'
00:13:32.177 16:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:32.177 16:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:32.435 [2024-12-06 16:29:14.120263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:32.435 [2024-12-06 16:29:14.120362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:32.435 [2024-12-06 16:29:14.128300] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:32.435 [2024-12-06 16:29:14.128414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:32.435 [2024-12-06 16:29:14.128462] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:32.435 [2024-12-06 16:29:14.128492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:32.435 [2024-12-06 16:29:14.128514] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:32.435 [2024-12-06 16:29:14.128538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:32.435 [2024-12-06 16:29:14.128578] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:32.435 [2024-12-06 16:29:14.128663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:32.435 [2024-12-06 16:29:14.145984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:32.435 BaseBdev1
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:32.435 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:32.436 [
00:13:32.436 {
00:13:32.436 "name": "BaseBdev1",
00:13:32.436 "aliases": [
00:13:32.436 "2f913010-c618-46cf-854c-ba949d53941e"
00:13:32.436 ],
00:13:32.436 "product_name": "Malloc disk",
00:13:32.436 "block_size": 512,
00:13:32.436 "num_blocks": 65536,
00:13:32.436 "uuid": "2f913010-c618-46cf-854c-ba949d53941e",
00:13:32.436 "assigned_rate_limits": {
00:13:32.436 "rw_ios_per_sec": 0,
00:13:32.436 "rw_mbytes_per_sec": 0,
00:13:32.436 "r_mbytes_per_sec": 0,
00:13:32.436 "w_mbytes_per_sec": 0
00:13:32.436 },
00:13:32.436 "claimed": true,
00:13:32.436 "claim_type": "exclusive_write",
00:13:32.436 "zoned": false,
00:13:32.436 "supported_io_types": {
00:13:32.436 "read": true,
00:13:32.436 "write": true,
00:13:32.436 "unmap": true,
00:13:32.436 "flush": true,
00:13:32.436 "reset": true,
00:13:32.436 "nvme_admin": false,
00:13:32.436 "nvme_io": false,
00:13:32.436 "nvme_io_md": false,
00:13:32.436 "write_zeroes": true,
00:13:32.436 "zcopy": true,
00:13:32.436 "get_zone_info": false,
00:13:32.436 "zone_management": false,
00:13:32.436 "zone_append": false,
00:13:32.436 "compare": false,
00:13:32.436 "compare_and_write": false,
00:13:32.436 "abort": true,
00:13:32.436 "seek_hole": false,
00:13:32.436 "seek_data": false,
00:13:32.436 "copy": true,
00:13:32.436 "nvme_iov_md": false
00:13:32.436 },
00:13:32.436 "memory_domains": [
00:13:32.436 {
00:13:32.436 "dma_device_id": "system",
00:13:32.436 "dma_device_type": 1
00:13:32.436 },
00:13:32.436 {
00:13:32.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:32.436 "dma_device_type": 2
00:13:32.436 }
00:13:32.436 ],
00:13:32.436 "driver_specific": {}
00:13:32.436 }
00:13:32.436 ]
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:32.436 "name": "Existed_Raid",
00:13:32.436 "uuid": "0f28fc0b-ae50-4ea5-8ea0-27187bea1cf5",
00:13:32.436 "strip_size_kb": 0,
00:13:32.436 "state": "configuring",
00:13:32.436 "raid_level": "raid1",
00:13:32.436 "superblock": true,
00:13:32.436 "num_base_bdevs": 4,
00:13:32.436 "num_base_bdevs_discovered": 1,
00:13:32.436 "num_base_bdevs_operational": 4,
00:13:32.436 "base_bdevs_list": [
00:13:32.436 {
00:13:32.436 "name": "BaseBdev1",
00:13:32.436 "uuid": "2f913010-c618-46cf-854c-ba949d53941e",
00:13:32.436 "is_configured": true,
00:13:32.436 "data_offset": 2048,
00:13:32.436 "data_size": 63488
00:13:32.436 },
00:13:32.436 {
00:13:32.436 "name": "BaseBdev2",
00:13:32.436 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:32.436 "is_configured": false,
00:13:32.436 "data_offset": 0,
00:13:32.436 "data_size": 0
00:13:32.436 },
00:13:32.436 {
00:13:32.436 "name": "BaseBdev3",
00:13:32.436 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:32.436 "is_configured": false,
00:13:32.436 "data_offset": 0,
00:13:32.436 "data_size": 0
00:13:32.436 },
00:13:32.436 {
00:13:32.436 "name": "BaseBdev4",
00:13:32.436 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:32.436 "is_configured": false,
00:13:32.436 "data_offset": 0,
00:13:32.436 "data_size": 0
00:13:32.436 }
00:13:32.436 ]
00:13:32.436 }'
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:32.436 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:33.002 [2024-12-06 16:29:14.601273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:33.002 [2024-12-06 16:29:14.601386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:33.002 [2024-12-06 16:29:14.609287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:33.002 [2024-12-06 16:29:14.611331] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:33.002 [2024-12-06 16:29:14.611425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:33.002 [2024-12-06 16:29:14.611458] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:33.002 [2024-12-06 16:29:14.611484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:33.002 [2024-12-06 16:29:14.611506] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:33.002 [2024-12-06 16:29:14.611537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.002 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:33.002 "name": "Existed_Raid",
00:13:33.002 "uuid": "91492ba9-d4d0-4883-91d3-744e8e603e28",
00:13:33.002 "strip_size_kb": 0,
00:13:33.002 "state": "configuring",
00:13:33.002 "raid_level": "raid1",
00:13:33.002 "superblock": true,
00:13:33.002 "num_base_bdevs": 4,
00:13:33.002 "num_base_bdevs_discovered": 1,
00:13:33.002 "num_base_bdevs_operational": 4,
00:13:33.002 "base_bdevs_list": [
00:13:33.002 {
00:13:33.002 "name": "BaseBdev1",
00:13:33.002 "uuid": "2f913010-c618-46cf-854c-ba949d53941e",
00:13:33.002 "is_configured": true,
00:13:33.002 "data_offset": 2048,
00:13:33.002 "data_size": 63488
00:13:33.002 },
00:13:33.002 {
00:13:33.002 "name": "BaseBdev2",
00:13:33.002 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:33.002 "is_configured": false,
00:13:33.002 "data_offset": 0,
00:13:33.002 "data_size": 0
00:13:33.002 },
00:13:33.002 {
00:13:33.003 "name": "BaseBdev3",
00:13:33.003 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:33.003 "is_configured": false,
00:13:33.003 "data_offset": 0,
00:13:33.003 "data_size": 0
00:13:33.003 },
00:13:33.003 {
00:13:33.003 "name": "BaseBdev4",
00:13:33.003 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:33.003 "is_configured": false,
00:13:33.003 "data_offset": 0,
00:13:33.003 "data_size": 0
00:13:33.003 }
00:13:33.003 ]
00:13:33.003 }'
00:13:33.003 16:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:33.003 16:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:33.260 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:33.260 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.260 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:33.518 [2024-12-06 16:29:15.104113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:33.518 BaseBdev2
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:33.518 [
00:13:33.518 {
00:13:33.518 "name": "BaseBdev2",
00:13:33.518 "aliases": [
00:13:33.518 "3ec19481-9704-4d68-b0ad-86e9de1e927f"
00:13:33.518 ],
00:13:33.518 "product_name": "Malloc disk",
00:13:33.518 "block_size": 512,
00:13:33.518 "num_blocks": 65536,
00:13:33.518 "uuid": "3ec19481-9704-4d68-b0ad-86e9de1e927f",
00:13:33.518 "assigned_rate_limits": {
00:13:33.518 "rw_ios_per_sec": 0,
00:13:33.518 "rw_mbytes_per_sec": 0,
00:13:33.518 "r_mbytes_per_sec": 0,
00:13:33.518 "w_mbytes_per_sec": 0
00:13:33.518 },
00:13:33.518 "claimed": true,
00:13:33.518 "claim_type": "exclusive_write",
00:13:33.518 "zoned": false,
00:13:33.518 "supported_io_types": {
00:13:33.518 "read": true,
00:13:33.518 "write": true,
00:13:33.518 "unmap": true,
00:13:33.518 "flush": true,
00:13:33.518 "reset": true,
00:13:33.518 "nvme_admin": false,
00:13:33.518 "nvme_io": false,
00:13:33.518 "nvme_io_md": false,
00:13:33.518 "write_zeroes": true,
00:13:33.518 "zcopy": true,
00:13:33.518 "get_zone_info": false,
00:13:33.518 "zone_management": false,
00:13:33.518 "zone_append": false,
00:13:33.518 "compare": false,
00:13:33.518 "compare_and_write": false,
00:13:33.518 "abort": true,
00:13:33.518 "seek_hole": false,
00:13:33.518 "seek_data": false,
00:13:33.518 "copy": true,
00:13:33.518 "nvme_iov_md": false
00:13:33.518 },
00:13:33.518 "memory_domains": [
00:13:33.518 {
00:13:33.518 "dma_device_id": "system",
00:13:33.518 "dma_device_type": 1
00:13:33.518 },
00:13:33.518 {
00:13:33.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:33.518 "dma_device_type": 2
00:13:33.518 }
00:13:33.518 ],
00:13:33.518 "driver_specific": {}
00:13:33.518 }
00:13:33.518 ]
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:33.518 "name": "Existed_Raid",
00:13:33.518 "uuid": "91492ba9-d4d0-4883-91d3-744e8e603e28",
00:13:33.518 "strip_size_kb": 0,
00:13:33.518 "state": "configuring",
00:13:33.518 "raid_level": "raid1",
00:13:33.518 "superblock": true,
00:13:33.518 "num_base_bdevs": 4,
00:13:33.518 "num_base_bdevs_discovered": 2,
00:13:33.518 "num_base_bdevs_operational": 4,
00:13:33.518 "base_bdevs_list": [
00:13:33.518 {
00:13:33.518 "name": "BaseBdev1",
00:13:33.518 "uuid": "2f913010-c618-46cf-854c-ba949d53941e",
00:13:33.518 "is_configured": true,
00:13:33.518 "data_offset": 2048,
00:13:33.518 "data_size": 63488
00:13:33.518 },
00:13:33.518 {
00:13:33.518 "name": "BaseBdev2",
00:13:33.518 "uuid": "3ec19481-9704-4d68-b0ad-86e9de1e927f",
00:13:33.518 "is_configured": true,
00:13:33.518 "data_offset": 2048,
00:13:33.518 "data_size": 63488
00:13:33.518 },
00:13:33.518 {
00:13:33.518 "name": "BaseBdev3",
00:13:33.518 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:33.518 "is_configured": false,
00:13:33.518 "data_offset": 0,
00:13:33.518 "data_size": 0
00:13:33.518 },
00:13:33.518 {
00:13:33.518 "name": "BaseBdev4",
00:13:33.518 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:33.518 "is_configured": false,
00:13:33.518 "data_offset": 0,
00:13:33.518 "data_size": 0
00:13:33.518 }
00:13:33.518 ]
00:13:33.518 }'
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:33.518 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.083 [2024-12-06 16:29:15.629267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:34.083 BaseBdev3
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.083 [
00:13:34.083 {
00:13:34.083 "name": "BaseBdev3",
00:13:34.083 "aliases": [
00:13:34.083 "ed0982be-2b39-49d0-9e1d-1a1fd57f4e11"
00:13:34.083 ],
00:13:34.083 "product_name": "Malloc disk",
00:13:34.083 "block_size": 512,
00:13:34.083 "num_blocks": 65536,
00:13:34.083 "uuid": "ed0982be-2b39-49d0-9e1d-1a1fd57f4e11",
00:13:34.083 "assigned_rate_limits": {
00:13:34.083 "rw_ios_per_sec": 0,
00:13:34.083 "rw_mbytes_per_sec": 0,
00:13:34.083 "r_mbytes_per_sec": 0,
00:13:34.083 "w_mbytes_per_sec": 0
00:13:34.083 },
00:13:34.083 "claimed": true,
00:13:34.083 "claim_type": "exclusive_write",
00:13:34.083 "zoned": false,
00:13:34.083 "supported_io_types": {
00:13:34.083 "read": true, "write": true,
00:13:34.083 "unmap": true,
00:13:34.083 "flush": true,
00:13:34.083 "reset": true,
00:13:34.083 "nvme_admin": false,
00:13:34.083 "nvme_io": false,
00:13:34.083 "nvme_io_md": false,
00:13:34.083 "write_zeroes": true,
00:13:34.083 "zcopy": true,
00:13:34.083 "get_zone_info": false,
00:13:34.083 "zone_management": false,
00:13:34.083 "zone_append": false,
00:13:34.083 "compare": false,
00:13:34.083 "compare_and_write": false,
00:13:34.083 "abort": true,
00:13:34.083 "seek_hole": false,
00:13:34.083 "seek_data": false,
00:13:34.083 "copy": true,
00:13:34.083 "nvme_iov_md": false
00:13:34.083 },
00:13:34.083 "memory_domains": [
00:13:34.083 {
00:13:34.083 "dma_device_id": "system",
00:13:34.083 "dma_device_type": 1
00:13:34.083 },
00:13:34.083 {
00:13:34.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:34.083 "dma_device_type": 2
00:13:34.083 }
00:13:34.083 ],
00:13:34.083 "driver_specific": {}
00:13:34.083 }
00:13:34.083 ]
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.083 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:34.084 "name": "Existed_Raid",
00:13:34.084 "uuid": "91492ba9-d4d0-4883-91d3-744e8e603e28",
00:13:34.084 "strip_size_kb": 0,
00:13:34.084 "state": "configuring",
00:13:34.084 "raid_level": "raid1",
00:13:34.084 "superblock": true,
00:13:34.084 "num_base_bdevs": 4,
00:13:34.084 "num_base_bdevs_discovered": 3,
00:13:34.084 "num_base_bdevs_operational": 4,
00:13:34.084 "base_bdevs_list": [
00:13:34.084 {
00:13:34.084 "name": "BaseBdev1",
00:13:34.084 "uuid": "2f913010-c618-46cf-854c-ba949d53941e",
00:13:34.084 "is_configured": true,
00:13:34.084 "data_offset": 2048,
00:13:34.084 "data_size": 63488
00:13:34.084 },
00:13:34.084 {
00:13:34.084 "name": "BaseBdev2",
00:13:34.084 "uuid": "3ec19481-9704-4d68-b0ad-86e9de1e927f",
00:13:34.084 "is_configured": true,
00:13:34.084 "data_offset": 2048,
00:13:34.084 "data_size": 63488
00:13:34.084 },
00:13:34.084 {
00:13:34.084 "name": "BaseBdev3",
00:13:34.084 "uuid": "ed0982be-2b39-49d0-9e1d-1a1fd57f4e11",
00:13:34.084 "is_configured": true,
00:13:34.084 "data_offset": 2048,
00:13:34.084 "data_size": 63488
00:13:34.084 },
00:13:34.084 {
00:13:34.084 "name": "BaseBdev4",
00:13:34.084 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:34.084 "is_configured": false,
00:13:34.084 "data_offset": 0,
00:13:34.084 "data_size": 0
00:13:34.084 }
00:13:34.084 ]
00:13:34.084 }'
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:34.084 16:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.343 [2024-12-06 16:29:16.147738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:34.343 [2024-12-06 16:29:16.148070] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:13:34.343 [2024-12-06 16:29:16.148132] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:34.343 BaseBdev4 [2024-12-06 16:29:16.148476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:13:34.343 [2024-12-06 16:29:16.148703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:13:34.343 [2024-12-06 16:29:16.148758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:13:34.343 [2024-12-06 16:29:16.148968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.343 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.343 [
00:13:34.343 {
00:13:34.343 "name": "BaseBdev4",
00:13:34.343 "aliases": [
00:13:34.343 "dda2c2ec-dc34-40e2-a9ed-4220a3f84bc6"
00:13:34.343 ],
00:13:34.343 "product_name": "Malloc disk",
00:13:34.343 "block_size": 512,
00:13:34.343 "num_blocks": 65536,
00:13:34.343 "uuid": "dda2c2ec-dc34-40e2-a9ed-4220a3f84bc6",
00:13:34.343 "assigned_rate_limits": {
00:13:34.343 "rw_ios_per_sec": 0,
00:13:34.343 "rw_mbytes_per_sec": 0,
00:13:34.343 "r_mbytes_per_sec": 0,
00:13:34.343 "w_mbytes_per_sec": 0
00:13:34.343 },
00:13:34.343 "claimed": true,
00:13:34.343 "claim_type": "exclusive_write",
00:13:34.343 "zoned": false,
00:13:34.343 "supported_io_types": {
00:13:34.343 "read": true,
00:13:34.343 "write": true,
00:13:34.343 "unmap": true,
00:13:34.343 "flush": true,
00:13:34.343 "reset": true,
00:13:34.343 "nvme_admin": false,
00:13:34.343 "nvme_io": false,
00:13:34.343 "nvme_io_md": false,
00:13:34.343 "write_zeroes": true,
00:13:34.343 "zcopy": true,
00:13:34.343 "get_zone_info": false,
00:13:34.343 "zone_management": false,
00:13:34.343 "zone_append": false,
00:13:34.343 "compare": false,
00:13:34.343 "compare_and_write": false,
00:13:34.343 "abort": true,
00:13:34.343 "seek_hole": false,
00:13:34.343 "seek_data": false,
00:13:34.343 "copy": true,
00:13:34.343 "nvme_iov_md": false
00:13:34.343 },
00:13:34.343 "memory_domains": [
00:13:34.343 {
00:13:34.343 "dma_device_id": "system",
00:13:34.343 "dma_device_type": 1
00:13:34.343 },
00:13:34.343 {
00:13:34.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:34.343 "dma_device_type": 2
00:13:34.343 }
00:13:34.343 ],
00:13:34.602 "driver_specific": {}
00:13:34.602 }
00:13:34.602 ]
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:34.602 "name": "Existed_Raid",
00:13:34.602 "uuid": "91492ba9-d4d0-4883-91d3-744e8e603e28",
00:13:34.602 "strip_size_kb": 0,
00:13:34.602 "state": "online",
00:13:34.602 "raid_level": "raid1",
00:13:34.602 "superblock": true,
00:13:34.602 "num_base_bdevs": 4,
00:13:34.602 "num_base_bdevs_discovered": 4,
00:13:34.602 "num_base_bdevs_operational": 4,
00:13:34.602 "base_bdevs_list": [
00:13:34.602 {
00:13:34.602 "name": "BaseBdev1",
00:13:34.602 "uuid": "2f913010-c618-46cf-854c-ba949d53941e",
00:13:34.602 "is_configured": true,
00:13:34.602 "data_offset": 2048,
00:13:34.602 "data_size": 63488
00:13:34.602 },
00:13:34.602 {
00:13:34.602 "name": "BaseBdev2",
00:13:34.602 "uuid": "3ec19481-9704-4d68-b0ad-86e9de1e927f",
00:13:34.602 "is_configured": true,
00:13:34.602 "data_offset": 2048,
00:13:34.602 "data_size": 63488
00:13:34.602 },
00:13:34.602 {
00:13:34.602 "name": "BaseBdev3",
00:13:34.602 "uuid": "ed0982be-2b39-49d0-9e1d-1a1fd57f4e11",
00:13:34.602 "is_configured": true,
00:13:34.602 "data_offset": 2048,
00:13:34.602 "data_size": 63488
00:13:34.602 },
00:13:34.602 {
00:13:34.602 "name": "BaseBdev4",
00:13:34.602 "uuid": "dda2c2ec-dc34-40e2-a9ed-4220a3f84bc6",
00:13:34.602 "is_configured": true,
00:13:34.602 "data_offset": 2048,
00:13:34.602 "data_size": 63488
00:13:34.602 }
00:13:34.602 ]
00:13:34.602 }'
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:34.602 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:34.861 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:13:34.861 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:13:34.861 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:34.861 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:34.861 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:13:34.861 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:34.861
16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:34.861 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:34.862 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.862 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.862 [2024-12-06 16:29:16.651483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.862 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.862 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:34.862 "name": "Existed_Raid", 00:13:34.862 "aliases": [ 00:13:34.862 "91492ba9-d4d0-4883-91d3-744e8e603e28" 00:13:34.862 ], 00:13:34.862 "product_name": "Raid Volume", 00:13:34.862 "block_size": 512, 00:13:34.862 "num_blocks": 63488, 00:13:34.862 "uuid": "91492ba9-d4d0-4883-91d3-744e8e603e28", 00:13:34.862 "assigned_rate_limits": { 00:13:34.862 "rw_ios_per_sec": 0, 00:13:34.862 "rw_mbytes_per_sec": 0, 00:13:34.862 "r_mbytes_per_sec": 0, 00:13:34.862 "w_mbytes_per_sec": 0 00:13:34.862 }, 00:13:34.862 "claimed": false, 00:13:34.862 "zoned": false, 00:13:34.862 "supported_io_types": { 00:13:34.862 "read": true, 00:13:34.862 "write": true, 00:13:34.862 "unmap": false, 00:13:34.862 "flush": false, 00:13:34.862 "reset": true, 00:13:34.862 "nvme_admin": false, 00:13:34.862 "nvme_io": false, 00:13:34.862 "nvme_io_md": false, 00:13:34.862 "write_zeroes": true, 00:13:34.862 "zcopy": false, 00:13:34.862 "get_zone_info": false, 00:13:34.862 "zone_management": false, 00:13:34.862 "zone_append": false, 00:13:34.862 "compare": false, 00:13:34.862 "compare_and_write": false, 00:13:34.862 "abort": false, 00:13:34.862 "seek_hole": false, 00:13:34.862 "seek_data": false, 00:13:34.862 "copy": false, 00:13:34.862 
"nvme_iov_md": false 00:13:34.862 }, 00:13:34.862 "memory_domains": [ 00:13:34.862 { 00:13:34.862 "dma_device_id": "system", 00:13:34.862 "dma_device_type": 1 00:13:34.862 }, 00:13:34.862 { 00:13:34.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.862 "dma_device_type": 2 00:13:34.862 }, 00:13:34.862 { 00:13:34.862 "dma_device_id": "system", 00:13:34.862 "dma_device_type": 1 00:13:34.862 }, 00:13:34.862 { 00:13:34.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.862 "dma_device_type": 2 00:13:34.862 }, 00:13:34.862 { 00:13:34.862 "dma_device_id": "system", 00:13:34.862 "dma_device_type": 1 00:13:34.862 }, 00:13:34.862 { 00:13:34.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.862 "dma_device_type": 2 00:13:34.862 }, 00:13:34.862 { 00:13:34.862 "dma_device_id": "system", 00:13:34.862 "dma_device_type": 1 00:13:34.862 }, 00:13:34.862 { 00:13:34.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.862 "dma_device_type": 2 00:13:34.862 } 00:13:34.862 ], 00:13:34.862 "driver_specific": { 00:13:34.862 "raid": { 00:13:34.862 "uuid": "91492ba9-d4d0-4883-91d3-744e8e603e28", 00:13:34.862 "strip_size_kb": 0, 00:13:34.862 "state": "online", 00:13:34.862 "raid_level": "raid1", 00:13:34.862 "superblock": true, 00:13:34.862 "num_base_bdevs": 4, 00:13:34.862 "num_base_bdevs_discovered": 4, 00:13:34.862 "num_base_bdevs_operational": 4, 00:13:34.862 "base_bdevs_list": [ 00:13:34.862 { 00:13:34.862 "name": "BaseBdev1", 00:13:34.862 "uuid": "2f913010-c618-46cf-854c-ba949d53941e", 00:13:34.862 "is_configured": true, 00:13:34.862 "data_offset": 2048, 00:13:34.862 "data_size": 63488 00:13:34.862 }, 00:13:34.862 { 00:13:34.862 "name": "BaseBdev2", 00:13:34.862 "uuid": "3ec19481-9704-4d68-b0ad-86e9de1e927f", 00:13:34.862 "is_configured": true, 00:13:34.862 "data_offset": 2048, 00:13:34.862 "data_size": 63488 00:13:34.862 }, 00:13:34.862 { 00:13:34.862 "name": "BaseBdev3", 00:13:34.862 "uuid": "ed0982be-2b39-49d0-9e1d-1a1fd57f4e11", 00:13:34.862 "is_configured": true, 
00:13:34.862 "data_offset": 2048, 00:13:34.862 "data_size": 63488 00:13:34.862 }, 00:13:34.862 { 00:13:34.862 "name": "BaseBdev4", 00:13:34.862 "uuid": "dda2c2ec-dc34-40e2-a9ed-4220a3f84bc6", 00:13:34.862 "is_configured": true, 00:13:34.862 "data_offset": 2048, 00:13:34.862 "data_size": 63488 00:13:34.862 } 00:13:34.862 ] 00:13:34.862 } 00:13:34.862 } 00:13:34.862 }' 00:13:34.862 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:35.121 BaseBdev2 00:13:35.121 BaseBdev3 00:13:35.121 BaseBdev4' 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.121 16:29:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.121 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.380 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.380 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.380 16:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:35.380 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.380 16:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.380 [2024-12-06 16:29:16.994595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:35.380 16:29:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.380 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.380 "name": "Existed_Raid", 00:13:35.380 "uuid": "91492ba9-d4d0-4883-91d3-744e8e603e28", 00:13:35.380 "strip_size_kb": 0, 00:13:35.380 
"state": "online", 00:13:35.380 "raid_level": "raid1", 00:13:35.380 "superblock": true, 00:13:35.380 "num_base_bdevs": 4, 00:13:35.380 "num_base_bdevs_discovered": 3, 00:13:35.380 "num_base_bdevs_operational": 3, 00:13:35.380 "base_bdevs_list": [ 00:13:35.380 { 00:13:35.380 "name": null, 00:13:35.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.380 "is_configured": false, 00:13:35.380 "data_offset": 0, 00:13:35.380 "data_size": 63488 00:13:35.380 }, 00:13:35.380 { 00:13:35.380 "name": "BaseBdev2", 00:13:35.380 "uuid": "3ec19481-9704-4d68-b0ad-86e9de1e927f", 00:13:35.380 "is_configured": true, 00:13:35.380 "data_offset": 2048, 00:13:35.380 "data_size": 63488 00:13:35.380 }, 00:13:35.380 { 00:13:35.380 "name": "BaseBdev3", 00:13:35.380 "uuid": "ed0982be-2b39-49d0-9e1d-1a1fd57f4e11", 00:13:35.380 "is_configured": true, 00:13:35.380 "data_offset": 2048, 00:13:35.380 "data_size": 63488 00:13:35.380 }, 00:13:35.380 { 00:13:35.380 "name": "BaseBdev4", 00:13:35.380 "uuid": "dda2c2ec-dc34-40e2-a9ed-4220a3f84bc6", 00:13:35.381 "is_configured": true, 00:13:35.381 "data_offset": 2048, 00:13:35.381 "data_size": 63488 00:13:35.381 } 00:13:35.381 ] 00:13:35.381 }' 00:13:35.381 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.381 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.639 16:29:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.639 [2024-12-06 16:29:17.446191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.639 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.898 [2024-12-06 16:29:17.514082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.898 [2024-12-06 16:29:17.586047] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:35.898 [2024-12-06 16:29:17.586165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.898 [2024-12-06 16:29:17.598408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.898 [2024-12-06 16:29:17.598475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.898 [2024-12-06 16:29:17.598488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:35.898 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.899 BaseBdev2 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:35.899 [ 00:13:35.899 { 00:13:35.899 "name": "BaseBdev2", 00:13:35.899 "aliases": [ 00:13:35.899 "421ce935-0386-4204-a012-eee0ef0bc70b" 00:13:35.899 ], 00:13:35.899 "product_name": "Malloc disk", 00:13:35.899 "block_size": 512, 00:13:35.899 "num_blocks": 65536, 00:13:35.899 "uuid": "421ce935-0386-4204-a012-eee0ef0bc70b", 00:13:35.899 "assigned_rate_limits": { 00:13:35.899 "rw_ios_per_sec": 0, 00:13:35.899 "rw_mbytes_per_sec": 0, 00:13:35.899 "r_mbytes_per_sec": 0, 00:13:35.899 "w_mbytes_per_sec": 0 00:13:35.899 }, 00:13:35.899 "claimed": false, 00:13:35.899 "zoned": false, 00:13:35.899 "supported_io_types": { 00:13:35.899 "read": true, 00:13:35.899 "write": true, 00:13:35.899 "unmap": true, 00:13:35.899 "flush": true, 00:13:35.899 "reset": true, 00:13:35.899 "nvme_admin": false, 00:13:35.899 "nvme_io": false, 00:13:35.899 "nvme_io_md": false, 00:13:35.899 "write_zeroes": true, 00:13:35.899 "zcopy": true, 00:13:35.899 "get_zone_info": false, 00:13:35.899 "zone_management": false, 00:13:35.899 "zone_append": false, 00:13:35.899 "compare": false, 00:13:35.899 "compare_and_write": false, 00:13:35.899 "abort": true, 00:13:35.899 "seek_hole": false, 00:13:35.899 "seek_data": false, 00:13:35.899 "copy": true, 00:13:35.899 "nvme_iov_md": false 00:13:35.899 }, 00:13:35.899 "memory_domains": [ 00:13:35.899 { 00:13:35.899 "dma_device_id": "system", 00:13:35.899 "dma_device_type": 1 00:13:35.899 }, 00:13:35.899 { 00:13:35.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.899 "dma_device_type": 2 00:13:35.899 } 00:13:35.899 ], 00:13:35.899 "driver_specific": {} 00:13:35.899 } 00:13:35.899 ] 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:35.899 16:29:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.899 BaseBdev3 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:35.899 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.899 16:29:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.899 [ 00:13:35.899 { 00:13:35.899 "name": "BaseBdev3", 00:13:35.899 "aliases": [ 00:13:35.899 "7a00a9be-4aef-424a-8621-8b1bb99d8f16" 00:13:36.158 ], 00:13:36.159 "product_name": "Malloc disk", 00:13:36.159 "block_size": 512, 00:13:36.159 "num_blocks": 65536, 00:13:36.159 "uuid": "7a00a9be-4aef-424a-8621-8b1bb99d8f16", 00:13:36.159 "assigned_rate_limits": { 00:13:36.159 "rw_ios_per_sec": 0, 00:13:36.159 "rw_mbytes_per_sec": 0, 00:13:36.159 "r_mbytes_per_sec": 0, 00:13:36.159 "w_mbytes_per_sec": 0 00:13:36.159 }, 00:13:36.159 "claimed": false, 00:13:36.159 "zoned": false, 00:13:36.159 "supported_io_types": { 00:13:36.159 "read": true, 00:13:36.159 "write": true, 00:13:36.159 "unmap": true, 00:13:36.159 "flush": true, 00:13:36.159 "reset": true, 00:13:36.159 "nvme_admin": false, 00:13:36.159 "nvme_io": false, 00:13:36.159 "nvme_io_md": false, 00:13:36.159 "write_zeroes": true, 00:13:36.159 "zcopy": true, 00:13:36.159 "get_zone_info": false, 00:13:36.159 "zone_management": false, 00:13:36.159 "zone_append": false, 00:13:36.159 "compare": false, 00:13:36.159 "compare_and_write": false, 00:13:36.159 "abort": true, 00:13:36.159 "seek_hole": false, 00:13:36.159 "seek_data": false, 00:13:36.159 "copy": true, 00:13:36.159 "nvme_iov_md": false 00:13:36.159 }, 00:13:36.159 "memory_domains": [ 00:13:36.159 { 00:13:36.159 "dma_device_id": "system", 00:13:36.159 "dma_device_type": 1 00:13:36.159 }, 00:13:36.159 { 00:13:36.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.159 "dma_device_type": 2 00:13:36.159 } 00:13:36.159 ], 00:13:36.159 "driver_specific": {} 00:13:36.159 } 00:13:36.159 ] 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.159 BaseBdev4 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.159 [ 00:13:36.159 { 00:13:36.159 "name": "BaseBdev4", 00:13:36.159 "aliases": [ 00:13:36.159 "e1a5a98f-8d00-4a28-a01a-2ca3d390b40b" 00:13:36.159 ], 00:13:36.159 "product_name": "Malloc disk", 00:13:36.159 "block_size": 512, 00:13:36.159 "num_blocks": 65536, 00:13:36.159 "uuid": "e1a5a98f-8d00-4a28-a01a-2ca3d390b40b", 00:13:36.159 "assigned_rate_limits": { 00:13:36.159 "rw_ios_per_sec": 0, 00:13:36.159 "rw_mbytes_per_sec": 0, 00:13:36.159 "r_mbytes_per_sec": 0, 00:13:36.159 "w_mbytes_per_sec": 0 00:13:36.159 }, 00:13:36.159 "claimed": false, 00:13:36.159 "zoned": false, 00:13:36.159 "supported_io_types": { 00:13:36.159 "read": true, 00:13:36.159 "write": true, 00:13:36.159 "unmap": true, 00:13:36.159 "flush": true, 00:13:36.159 "reset": true, 00:13:36.159 "nvme_admin": false, 00:13:36.159 "nvme_io": false, 00:13:36.159 "nvme_io_md": false, 00:13:36.159 "write_zeroes": true, 00:13:36.159 "zcopy": true, 00:13:36.159 "get_zone_info": false, 00:13:36.159 "zone_management": false, 00:13:36.159 "zone_append": false, 00:13:36.159 "compare": false, 00:13:36.159 "compare_and_write": false, 00:13:36.159 "abort": true, 00:13:36.159 "seek_hole": false, 00:13:36.159 "seek_data": false, 00:13:36.159 "copy": true, 00:13:36.159 "nvme_iov_md": false 00:13:36.159 }, 00:13:36.159 "memory_domains": [ 00:13:36.159 { 00:13:36.159 "dma_device_id": "system", 00:13:36.159 "dma_device_type": 1 00:13:36.159 }, 00:13:36.159 { 00:13:36.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.159 "dma_device_type": 2 00:13:36.159 } 00:13:36.159 ], 00:13:36.159 "driver_specific": {} 00:13:36.159 } 00:13:36.159 ] 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
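The trace above creates a malloc base bdev (`bdev_malloc_create 32 512 -b BaseBdev4`) and then polls for it through the harness's `waitforbdev` helper. Outside the harness, the same sequence can be reproduced with SPDK's stock `rpc.py` client. This is a minimal sketch, assuming a running SPDK target reachable at the default RPC socket and `rpc.py` on `PATH`; the guard lets it no-op on hosts without SPDK installed:

```shell
# Sketch of the BaseBdev4 creation step from the trace above.
# Assumption: an SPDK target is running and rpc.py is the stock client script.

create_and_wait() {
    local name=$1
    # 32 MiB malloc disk with 512-byte blocks, matching the harness call.
    rpc.py bdev_malloc_create 32 512 -b "$name"
    # Let pending examine callbacks drain, then poll for the bdev for
    # up to 2000 ms -- the same steps waitforbdev wraps.
    rpc.py bdev_wait_for_examine
    rpc.py bdev_get_bdevs -b "$name" -t 2000
}

if command -v rpc.py >/dev/null 2>&1; then
    create_and_wait BaseBdev4
else
    echo "rpc.py not found; skipping live RPC calls"
fi
```

The `-t 2000` timeout mirrors the `bdev_timeout=2000` default visible in the trace's `waitforbdev` expansion.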
00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.159 [2024-12-06 16:29:17.797278] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:36.159 [2024-12-06 16:29:17.797386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:36.159 [2024-12-06 16:29:17.797439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.159 [2024-12-06 16:29:17.799702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:36.159 [2024-12-06 16:29:17.799821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.159 "name": "Existed_Raid", 00:13:36.159 "uuid": "ea50bff1-56ce-4aaa-8437-aea44876c740", 00:13:36.159 "strip_size_kb": 0, 00:13:36.159 "state": "configuring", 00:13:36.159 "raid_level": "raid1", 00:13:36.159 "superblock": true, 00:13:36.159 "num_base_bdevs": 4, 00:13:36.159 "num_base_bdevs_discovered": 3, 00:13:36.159 "num_base_bdevs_operational": 4, 00:13:36.159 "base_bdevs_list": [ 00:13:36.159 { 00:13:36.159 "name": "BaseBdev1", 00:13:36.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.159 "is_configured": false, 00:13:36.159 "data_offset": 0, 00:13:36.159 "data_size": 0 00:13:36.159 }, 00:13:36.159 { 00:13:36.159 "name": "BaseBdev2", 00:13:36.159 "uuid": "421ce935-0386-4204-a012-eee0ef0bc70b", 
00:13:36.159 "is_configured": true, 00:13:36.159 "data_offset": 2048, 00:13:36.159 "data_size": 63488 00:13:36.159 }, 00:13:36.159 { 00:13:36.159 "name": "BaseBdev3", 00:13:36.159 "uuid": "7a00a9be-4aef-424a-8621-8b1bb99d8f16", 00:13:36.159 "is_configured": true, 00:13:36.159 "data_offset": 2048, 00:13:36.159 "data_size": 63488 00:13:36.159 }, 00:13:36.159 { 00:13:36.159 "name": "BaseBdev4", 00:13:36.159 "uuid": "e1a5a98f-8d00-4a28-a01a-2ca3d390b40b", 00:13:36.159 "is_configured": true, 00:13:36.159 "data_offset": 2048, 00:13:36.159 "data_size": 63488 00:13:36.159 } 00:13:36.159 ] 00:13:36.159 }' 00:13:36.159 16:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.160 16:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.727 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.728 [2024-12-06 16:29:18.288443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.728 "name": "Existed_Raid", 00:13:36.728 "uuid": "ea50bff1-56ce-4aaa-8437-aea44876c740", 00:13:36.728 "strip_size_kb": 0, 00:13:36.728 "state": "configuring", 00:13:36.728 "raid_level": "raid1", 00:13:36.728 "superblock": true, 00:13:36.728 "num_base_bdevs": 4, 00:13:36.728 "num_base_bdevs_discovered": 2, 00:13:36.728 "num_base_bdevs_operational": 4, 00:13:36.728 "base_bdevs_list": [ 00:13:36.728 { 00:13:36.728 "name": "BaseBdev1", 00:13:36.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.728 "is_configured": false, 00:13:36.728 "data_offset": 0, 00:13:36.728 "data_size": 0 00:13:36.728 }, 00:13:36.728 { 00:13:36.728 "name": null, 00:13:36.728 "uuid": "421ce935-0386-4204-a012-eee0ef0bc70b", 00:13:36.728 
"is_configured": false, 00:13:36.728 "data_offset": 0, 00:13:36.728 "data_size": 63488 00:13:36.728 }, 00:13:36.728 { 00:13:36.728 "name": "BaseBdev3", 00:13:36.728 "uuid": "7a00a9be-4aef-424a-8621-8b1bb99d8f16", 00:13:36.728 "is_configured": true, 00:13:36.728 "data_offset": 2048, 00:13:36.728 "data_size": 63488 00:13:36.728 }, 00:13:36.728 { 00:13:36.728 "name": "BaseBdev4", 00:13:36.728 "uuid": "e1a5a98f-8d00-4a28-a01a-2ca3d390b40b", 00:13:36.728 "is_configured": true, 00:13:36.728 "data_offset": 2048, 00:13:36.728 "data_size": 63488 00:13:36.728 } 00:13:36.728 ] 00:13:36.728 }' 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.728 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.987 [2024-12-06 16:29:18.803150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.987 BaseBdev1 
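The `verify_raid_bdev_state` helper seen throughout the trace fetches `bdev_raid_get_bdevs all` and filters it with `jq` to compare state, raid level, and discovered/operational base-bdev counts. The same filter can be exercised offline against a captured document shaped like the one in the trace; the JSON below is an abridged sample (a subset of the fields from the log), not live target output:

```shell
# Offline sketch of the jq filtering done by bdev_raid.sh@113 in the trace.
# Sample document abridged from the "Existed_Raid" dump in the log.
raid_bdevs='[{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 4
}]'

# Select the entry by name, as the harness does...
info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<<"$raid_bdevs")

# ...then pull individual fields to compare against the expected state.
state=$(jq -r '.state' <<<"$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")
echo "state=$state discovered=$discovered"
# -> state=configuring discovered=3
```

The per-slot checks in the trace (e.g. `jq '.[0].base_bdevs_list[1].is_configured'`) index into `base_bdevs_list` the same way, which is how the harness confirms a removed base bdev flips to `"is_configured": false` while the raid stays in the `configuring` state.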
00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.987 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.246 [ 00:13:37.246 { 00:13:37.246 "name": "BaseBdev1", 00:13:37.246 "aliases": [ 00:13:37.246 "88cb22e2-3639-440e-af9d-a07098936901" 00:13:37.246 ], 00:13:37.246 "product_name": "Malloc disk", 00:13:37.246 "block_size": 512, 00:13:37.246 "num_blocks": 65536, 00:13:37.246 "uuid": "88cb22e2-3639-440e-af9d-a07098936901", 00:13:37.246 "assigned_rate_limits": { 00:13:37.246 
"rw_ios_per_sec": 0, 00:13:37.246 "rw_mbytes_per_sec": 0, 00:13:37.246 "r_mbytes_per_sec": 0, 00:13:37.246 "w_mbytes_per_sec": 0 00:13:37.246 }, 00:13:37.246 "claimed": true, 00:13:37.246 "claim_type": "exclusive_write", 00:13:37.246 "zoned": false, 00:13:37.246 "supported_io_types": { 00:13:37.246 "read": true, 00:13:37.246 "write": true, 00:13:37.246 "unmap": true, 00:13:37.246 "flush": true, 00:13:37.246 "reset": true, 00:13:37.246 "nvme_admin": false, 00:13:37.246 "nvme_io": false, 00:13:37.246 "nvme_io_md": false, 00:13:37.246 "write_zeroes": true, 00:13:37.246 "zcopy": true, 00:13:37.246 "get_zone_info": false, 00:13:37.246 "zone_management": false, 00:13:37.246 "zone_append": false, 00:13:37.246 "compare": false, 00:13:37.246 "compare_and_write": false, 00:13:37.246 "abort": true, 00:13:37.246 "seek_hole": false, 00:13:37.246 "seek_data": false, 00:13:37.246 "copy": true, 00:13:37.246 "nvme_iov_md": false 00:13:37.246 }, 00:13:37.246 "memory_domains": [ 00:13:37.246 { 00:13:37.246 "dma_device_id": "system", 00:13:37.246 "dma_device_type": 1 00:13:37.246 }, 00:13:37.246 { 00:13:37.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.246 "dma_device_type": 2 00:13:37.246 } 00:13:37.246 ], 00:13:37.246 "driver_specific": {} 00:13:37.246 } 00:13:37.246 ] 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.246 "name": "Existed_Raid", 00:13:37.246 "uuid": "ea50bff1-56ce-4aaa-8437-aea44876c740", 00:13:37.246 "strip_size_kb": 0, 00:13:37.246 "state": "configuring", 00:13:37.246 "raid_level": "raid1", 00:13:37.246 "superblock": true, 00:13:37.246 "num_base_bdevs": 4, 00:13:37.246 "num_base_bdevs_discovered": 3, 00:13:37.246 "num_base_bdevs_operational": 4, 00:13:37.246 "base_bdevs_list": [ 00:13:37.246 { 00:13:37.246 "name": "BaseBdev1", 00:13:37.246 "uuid": "88cb22e2-3639-440e-af9d-a07098936901", 00:13:37.246 "is_configured": true, 00:13:37.246 "data_offset": 2048, 00:13:37.246 "data_size": 63488 
00:13:37.246 }, 00:13:37.246 { 00:13:37.246 "name": null, 00:13:37.246 "uuid": "421ce935-0386-4204-a012-eee0ef0bc70b", 00:13:37.246 "is_configured": false, 00:13:37.246 "data_offset": 0, 00:13:37.246 "data_size": 63488 00:13:37.246 }, 00:13:37.246 { 00:13:37.246 "name": "BaseBdev3", 00:13:37.246 "uuid": "7a00a9be-4aef-424a-8621-8b1bb99d8f16", 00:13:37.246 "is_configured": true, 00:13:37.246 "data_offset": 2048, 00:13:37.246 "data_size": 63488 00:13:37.246 }, 00:13:37.246 { 00:13:37.246 "name": "BaseBdev4", 00:13:37.246 "uuid": "e1a5a98f-8d00-4a28-a01a-2ca3d390b40b", 00:13:37.246 "is_configured": true, 00:13:37.246 "data_offset": 2048, 00:13:37.246 "data_size": 63488 00:13:37.246 } 00:13:37.246 ] 00:13:37.246 }' 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.246 16:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.504 
[2024-12-06 16:29:19.330365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.504 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.505 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.505 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.505 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.505 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.505 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.763 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.763 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.763 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.763 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.763 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.763 16:29:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.763 "name": "Existed_Raid", 00:13:37.763 "uuid": "ea50bff1-56ce-4aaa-8437-aea44876c740", 00:13:37.763 "strip_size_kb": 0, 00:13:37.763 "state": "configuring", 00:13:37.763 "raid_level": "raid1", 00:13:37.763 "superblock": true, 00:13:37.763 "num_base_bdevs": 4, 00:13:37.763 "num_base_bdevs_discovered": 2, 00:13:37.763 "num_base_bdevs_operational": 4, 00:13:37.763 "base_bdevs_list": [ 00:13:37.763 { 00:13:37.763 "name": "BaseBdev1", 00:13:37.763 "uuid": "88cb22e2-3639-440e-af9d-a07098936901", 00:13:37.763 "is_configured": true, 00:13:37.763 "data_offset": 2048, 00:13:37.763 "data_size": 63488 00:13:37.763 }, 00:13:37.763 { 00:13:37.763 "name": null, 00:13:37.763 "uuid": "421ce935-0386-4204-a012-eee0ef0bc70b", 00:13:37.763 "is_configured": false, 00:13:37.763 "data_offset": 0, 00:13:37.763 "data_size": 63488 00:13:37.763 }, 00:13:37.763 { 00:13:37.763 "name": null, 00:13:37.763 "uuid": "7a00a9be-4aef-424a-8621-8b1bb99d8f16", 00:13:37.763 "is_configured": false, 00:13:37.763 "data_offset": 0, 00:13:37.763 "data_size": 63488 00:13:37.763 }, 00:13:37.763 { 00:13:37.763 "name": "BaseBdev4", 00:13:37.763 "uuid": "e1a5a98f-8d00-4a28-a01a-2ca3d390b40b", 00:13:37.763 "is_configured": true, 00:13:37.763 "data_offset": 2048, 00:13:37.763 "data_size": 63488 00:13:37.763 } 00:13:37.763 ] 00:13:37.763 }' 00:13:37.763 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.763 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.022 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.022 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:38.022 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.022 
16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.022 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.022 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:38.022 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:38.022 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.022 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.281 [2024-12-06 16:29:19.865449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.281 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.281 "name": "Existed_Raid", 00:13:38.281 "uuid": "ea50bff1-56ce-4aaa-8437-aea44876c740", 00:13:38.281 "strip_size_kb": 0, 00:13:38.282 "state": "configuring", 00:13:38.282 "raid_level": "raid1", 00:13:38.282 "superblock": true, 00:13:38.282 "num_base_bdevs": 4, 00:13:38.282 "num_base_bdevs_discovered": 3, 00:13:38.282 "num_base_bdevs_operational": 4, 00:13:38.282 "base_bdevs_list": [ 00:13:38.282 { 00:13:38.282 "name": "BaseBdev1", 00:13:38.282 "uuid": "88cb22e2-3639-440e-af9d-a07098936901", 00:13:38.282 "is_configured": true, 00:13:38.282 "data_offset": 2048, 00:13:38.282 "data_size": 63488 00:13:38.282 }, 00:13:38.282 { 00:13:38.282 "name": null, 00:13:38.282 "uuid": "421ce935-0386-4204-a012-eee0ef0bc70b", 00:13:38.282 "is_configured": false, 00:13:38.282 "data_offset": 0, 00:13:38.282 "data_size": 63488 00:13:38.282 }, 00:13:38.282 { 00:13:38.282 "name": "BaseBdev3", 00:13:38.282 "uuid": "7a00a9be-4aef-424a-8621-8b1bb99d8f16", 00:13:38.282 "is_configured": true, 00:13:38.282 "data_offset": 2048, 00:13:38.282 "data_size": 63488 00:13:38.282 }, 00:13:38.282 { 00:13:38.282 "name": "BaseBdev4", 00:13:38.282 "uuid": 
"e1a5a98f-8d00-4a28-a01a-2ca3d390b40b", 00:13:38.282 "is_configured": true, 00:13:38.282 "data_offset": 2048, 00:13:38.282 "data_size": 63488 00:13:38.282 } 00:13:38.282 ] 00:13:38.282 }' 00:13:38.282 16:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.282 16:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.540 [2024-12-06 16:29:20.332668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.540 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.541 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.799 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.799 "name": "Existed_Raid", 00:13:38.799 "uuid": "ea50bff1-56ce-4aaa-8437-aea44876c740", 00:13:38.799 "strip_size_kb": 0, 00:13:38.799 "state": "configuring", 00:13:38.799 "raid_level": "raid1", 00:13:38.799 "superblock": true, 00:13:38.799 "num_base_bdevs": 4, 00:13:38.799 "num_base_bdevs_discovered": 2, 00:13:38.799 "num_base_bdevs_operational": 4, 00:13:38.799 "base_bdevs_list": [ 00:13:38.799 { 00:13:38.799 "name": null, 00:13:38.799 
"uuid": "88cb22e2-3639-440e-af9d-a07098936901", 00:13:38.799 "is_configured": false, 00:13:38.799 "data_offset": 0, 00:13:38.799 "data_size": 63488 00:13:38.799 }, 00:13:38.799 { 00:13:38.799 "name": null, 00:13:38.799 "uuid": "421ce935-0386-4204-a012-eee0ef0bc70b", 00:13:38.799 "is_configured": false, 00:13:38.799 "data_offset": 0, 00:13:38.799 "data_size": 63488 00:13:38.799 }, 00:13:38.799 { 00:13:38.799 "name": "BaseBdev3", 00:13:38.799 "uuid": "7a00a9be-4aef-424a-8621-8b1bb99d8f16", 00:13:38.799 "is_configured": true, 00:13:38.799 "data_offset": 2048, 00:13:38.799 "data_size": 63488 00:13:38.799 }, 00:13:38.799 { 00:13:38.799 "name": "BaseBdev4", 00:13:38.799 "uuid": "e1a5a98f-8d00-4a28-a01a-2ca3d390b40b", 00:13:38.799 "is_configured": true, 00:13:38.799 "data_offset": 2048, 00:13:38.799 "data_size": 63488 00:13:38.799 } 00:13:38.799 ] 00:13:38.799 }' 00:13:38.799 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.799 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.064 [2024-12-06 16:29:20.854580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.064 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.330 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.330 "name": "Existed_Raid", 00:13:39.330 "uuid": "ea50bff1-56ce-4aaa-8437-aea44876c740", 00:13:39.330 "strip_size_kb": 0, 00:13:39.330 "state": "configuring", 00:13:39.330 "raid_level": "raid1", 00:13:39.330 "superblock": true, 00:13:39.330 "num_base_bdevs": 4, 00:13:39.330 "num_base_bdevs_discovered": 3, 00:13:39.330 "num_base_bdevs_operational": 4, 00:13:39.330 "base_bdevs_list": [ 00:13:39.330 { 00:13:39.330 "name": null, 00:13:39.330 "uuid": "88cb22e2-3639-440e-af9d-a07098936901", 00:13:39.330 "is_configured": false, 00:13:39.330 "data_offset": 0, 00:13:39.330 "data_size": 63488 00:13:39.330 }, 00:13:39.330 { 00:13:39.330 "name": "BaseBdev2", 00:13:39.330 "uuid": "421ce935-0386-4204-a012-eee0ef0bc70b", 00:13:39.330 "is_configured": true, 00:13:39.330 "data_offset": 2048, 00:13:39.330 "data_size": 63488 00:13:39.330 }, 00:13:39.330 { 00:13:39.330 "name": "BaseBdev3", 00:13:39.330 "uuid": "7a00a9be-4aef-424a-8621-8b1bb99d8f16", 00:13:39.330 "is_configured": true, 00:13:39.330 "data_offset": 2048, 00:13:39.330 "data_size": 63488 00:13:39.330 }, 00:13:39.330 { 00:13:39.330 "name": "BaseBdev4", 00:13:39.330 "uuid": "e1a5a98f-8d00-4a28-a01a-2ca3d390b40b", 00:13:39.330 "is_configured": true, 00:13:39.330 "data_offset": 2048, 00:13:39.330 "data_size": 63488 00:13:39.330 } 00:13:39.330 ] 00:13:39.330 }' 00:13:39.330 16:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.330 16:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 88cb22e2-3639-440e-af9d-a07098936901 00:13:39.588 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.589 [2024-12-06 16:29:21.392880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:39.589 NewBaseBdev 00:13:39.589 [2024-12-06 16:29:21.393148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:39.589 [2024-12-06 16:29:21.393171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:39.589 [2024-12-06 16:29:21.393488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:13:39.589 [2024-12-06 16:29:21.393626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:39.589 [2024-12-06 16:29:21.393637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:39.589 [2024-12-06 16:29:21.393748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.589 16:29:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:39.589 [ 00:13:39.589 { 00:13:39.589 "name": "NewBaseBdev", 00:13:39.589 "aliases": [ 00:13:39.589 "88cb22e2-3639-440e-af9d-a07098936901" 00:13:39.589 ], 00:13:39.589 "product_name": "Malloc disk", 00:13:39.589 "block_size": 512, 00:13:39.589 "num_blocks": 65536, 00:13:39.589 "uuid": "88cb22e2-3639-440e-af9d-a07098936901", 00:13:39.589 "assigned_rate_limits": { 00:13:39.589 "rw_ios_per_sec": 0, 00:13:39.589 "rw_mbytes_per_sec": 0, 00:13:39.589 "r_mbytes_per_sec": 0, 00:13:39.589 "w_mbytes_per_sec": 0 00:13:39.589 }, 00:13:39.589 "claimed": true, 00:13:39.589 "claim_type": "exclusive_write", 00:13:39.589 "zoned": false, 00:13:39.589 "supported_io_types": { 00:13:39.589 "read": true, 00:13:39.589 "write": true, 00:13:39.589 "unmap": true, 00:13:39.589 "flush": true, 00:13:39.589 "reset": true, 00:13:39.589 "nvme_admin": false, 00:13:39.589 "nvme_io": false, 00:13:39.589 "nvme_io_md": false, 00:13:39.589 "write_zeroes": true, 00:13:39.589 "zcopy": true, 00:13:39.589 "get_zone_info": false, 00:13:39.589 "zone_management": false, 00:13:39.589 "zone_append": false, 00:13:39.589 "compare": false, 00:13:39.589 "compare_and_write": false, 00:13:39.589 "abort": true, 00:13:39.589 "seek_hole": false, 00:13:39.589 "seek_data": false, 00:13:39.589 "copy": true, 00:13:39.589 "nvme_iov_md": false 00:13:39.589 }, 00:13:39.589 "memory_domains": [ 00:13:39.589 { 00:13:39.848 "dma_device_id": "system", 00:13:39.848 "dma_device_type": 1 00:13:39.848 }, 00:13:39.848 { 00:13:39.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.848 "dma_device_type": 2 00:13:39.848 } 00:13:39.848 ], 00:13:39.848 "driver_specific": {} 00:13:39.848 } 00:13:39.848 ] 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.848 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.848 "name": "Existed_Raid", 00:13:39.848 "uuid": "ea50bff1-56ce-4aaa-8437-aea44876c740", 00:13:39.848 "strip_size_kb": 0, 00:13:39.848 "state": "online", 00:13:39.848 "raid_level": 
"raid1", 00:13:39.848 "superblock": true, 00:13:39.848 "num_base_bdevs": 4, 00:13:39.848 "num_base_bdevs_discovered": 4, 00:13:39.848 "num_base_bdevs_operational": 4, 00:13:39.848 "base_bdevs_list": [ 00:13:39.848 { 00:13:39.848 "name": "NewBaseBdev", 00:13:39.848 "uuid": "88cb22e2-3639-440e-af9d-a07098936901", 00:13:39.848 "is_configured": true, 00:13:39.848 "data_offset": 2048, 00:13:39.848 "data_size": 63488 00:13:39.848 }, 00:13:39.848 { 00:13:39.848 "name": "BaseBdev2", 00:13:39.848 "uuid": "421ce935-0386-4204-a012-eee0ef0bc70b", 00:13:39.848 "is_configured": true, 00:13:39.848 "data_offset": 2048, 00:13:39.848 "data_size": 63488 00:13:39.848 }, 00:13:39.848 { 00:13:39.849 "name": "BaseBdev3", 00:13:39.849 "uuid": "7a00a9be-4aef-424a-8621-8b1bb99d8f16", 00:13:39.849 "is_configured": true, 00:13:39.849 "data_offset": 2048, 00:13:39.849 "data_size": 63488 00:13:39.849 }, 00:13:39.849 { 00:13:39.849 "name": "BaseBdev4", 00:13:39.849 "uuid": "e1a5a98f-8d00-4a28-a01a-2ca3d390b40b", 00:13:39.849 "is_configured": true, 00:13:39.849 "data_offset": 2048, 00:13:39.849 "data_size": 63488 00:13:39.849 } 00:13:39.849 ] 00:13:39.849 }' 00:13:39.849 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.849 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.107 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:40.107 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:40.107 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:40.107 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:40.107 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:40.107 16:29:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:40.107 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:40.107 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:40.107 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.107 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.107 [2024-12-06 16:29:21.928448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.365 16:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.365 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:40.365 "name": "Existed_Raid", 00:13:40.365 "aliases": [ 00:13:40.365 "ea50bff1-56ce-4aaa-8437-aea44876c740" 00:13:40.365 ], 00:13:40.365 "product_name": "Raid Volume", 00:13:40.365 "block_size": 512, 00:13:40.365 "num_blocks": 63488, 00:13:40.365 "uuid": "ea50bff1-56ce-4aaa-8437-aea44876c740", 00:13:40.365 "assigned_rate_limits": { 00:13:40.365 "rw_ios_per_sec": 0, 00:13:40.365 "rw_mbytes_per_sec": 0, 00:13:40.365 "r_mbytes_per_sec": 0, 00:13:40.365 "w_mbytes_per_sec": 0 00:13:40.365 }, 00:13:40.365 "claimed": false, 00:13:40.365 "zoned": false, 00:13:40.365 "supported_io_types": { 00:13:40.365 "read": true, 00:13:40.365 "write": true, 00:13:40.365 "unmap": false, 00:13:40.365 "flush": false, 00:13:40.365 "reset": true, 00:13:40.365 "nvme_admin": false, 00:13:40.365 "nvme_io": false, 00:13:40.365 "nvme_io_md": false, 00:13:40.365 "write_zeroes": true, 00:13:40.365 "zcopy": false, 00:13:40.365 "get_zone_info": false, 00:13:40.365 "zone_management": false, 00:13:40.365 "zone_append": false, 00:13:40.365 "compare": false, 00:13:40.365 "compare_and_write": false, 00:13:40.365 "abort": false, 00:13:40.365 "seek_hole": false, 
00:13:40.365 "seek_data": false, 00:13:40.365 "copy": false, 00:13:40.365 "nvme_iov_md": false 00:13:40.365 }, 00:13:40.365 "memory_domains": [ 00:13:40.365 { 00:13:40.365 "dma_device_id": "system", 00:13:40.365 "dma_device_type": 1 00:13:40.365 }, 00:13:40.365 { 00:13:40.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.365 "dma_device_type": 2 00:13:40.365 }, 00:13:40.365 { 00:13:40.365 "dma_device_id": "system", 00:13:40.365 "dma_device_type": 1 00:13:40.365 }, 00:13:40.365 { 00:13:40.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.365 "dma_device_type": 2 00:13:40.365 }, 00:13:40.365 { 00:13:40.365 "dma_device_id": "system", 00:13:40.365 "dma_device_type": 1 00:13:40.365 }, 00:13:40.365 { 00:13:40.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.365 "dma_device_type": 2 00:13:40.365 }, 00:13:40.365 { 00:13:40.365 "dma_device_id": "system", 00:13:40.365 "dma_device_type": 1 00:13:40.365 }, 00:13:40.365 { 00:13:40.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.365 "dma_device_type": 2 00:13:40.365 } 00:13:40.365 ], 00:13:40.365 "driver_specific": { 00:13:40.366 "raid": { 00:13:40.366 "uuid": "ea50bff1-56ce-4aaa-8437-aea44876c740", 00:13:40.366 "strip_size_kb": 0, 00:13:40.366 "state": "online", 00:13:40.366 "raid_level": "raid1", 00:13:40.366 "superblock": true, 00:13:40.366 "num_base_bdevs": 4, 00:13:40.366 "num_base_bdevs_discovered": 4, 00:13:40.366 "num_base_bdevs_operational": 4, 00:13:40.366 "base_bdevs_list": [ 00:13:40.366 { 00:13:40.366 "name": "NewBaseBdev", 00:13:40.366 "uuid": "88cb22e2-3639-440e-af9d-a07098936901", 00:13:40.366 "is_configured": true, 00:13:40.366 "data_offset": 2048, 00:13:40.366 "data_size": 63488 00:13:40.366 }, 00:13:40.366 { 00:13:40.366 "name": "BaseBdev2", 00:13:40.366 "uuid": "421ce935-0386-4204-a012-eee0ef0bc70b", 00:13:40.366 "is_configured": true, 00:13:40.366 "data_offset": 2048, 00:13:40.366 "data_size": 63488 00:13:40.366 }, 00:13:40.366 { 00:13:40.366 "name": "BaseBdev3", 00:13:40.366 "uuid": 
"7a00a9be-4aef-424a-8621-8b1bb99d8f16", 00:13:40.366 "is_configured": true, 00:13:40.366 "data_offset": 2048, 00:13:40.366 "data_size": 63488 00:13:40.366 }, 00:13:40.366 { 00:13:40.366 "name": "BaseBdev4", 00:13:40.366 "uuid": "e1a5a98f-8d00-4a28-a01a-2ca3d390b40b", 00:13:40.366 "is_configured": true, 00:13:40.366 "data_offset": 2048, 00:13:40.366 "data_size": 63488 00:13:40.366 } 00:13:40.366 ] 00:13:40.366 } 00:13:40.366 } 00:13:40.366 }' 00:13:40.366 16:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:40.366 BaseBdev2 00:13:40.366 BaseBdev3 00:13:40.366 BaseBdev4' 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.366 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.625 
16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.625 [2024-12-06 16:29:22.263555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:40.625 [2024-12-06 16:29:22.263635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.625 [2024-12-06 16:29:22.263759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.625 [2024-12-06 16:29:22.264172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.625 [2024-12-06 16:29:22.264286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:40.625 16:29:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 85034 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85034 ']' 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 85034 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85034 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:40.625 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:40.626 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85034' 00:13:40.626 killing process with pid 85034 00:13:40.626 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 85034 00:13:40.626 [2024-12-06 16:29:22.308433] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:40.626 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 85034 00:13:40.626 [2024-12-06 16:29:22.351100] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:40.885 16:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:40.885 00:13:40.885 real 0m9.868s 00:13:40.885 user 0m16.937s 00:13:40.885 sys 0m2.082s 00:13:40.885 16:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.885 16:29:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:40.885 ************************************ 00:13:40.885 END TEST raid_state_function_test_sb 00:13:40.885 ************************************ 00:13:40.885 16:29:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:40.885 16:29:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:40.885 16:29:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.885 16:29:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:40.885 ************************************ 00:13:40.885 START TEST raid_superblock_test 00:13:40.885 ************************************ 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85682 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85682 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85682 ']' 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.885 16:29:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.144 [2024-12-06 16:29:22.739837] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:13:41.144 [2024-12-06 16:29:22.739977] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85682 ] 00:13:41.144 [2024-12-06 16:29:22.914895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.144 [2024-12-06 16:29:22.945162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.403 [2024-12-06 16:29:22.990174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.403 [2024-12-06 16:29:22.990222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:41.971 
16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.971 malloc1 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.971 [2024-12-06 16:29:23.619229] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:41.971 [2024-12-06 16:29:23.619343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.971 [2024-12-06 16:29:23.619392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:41.971 [2024-12-06 16:29:23.619429] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.971 [2024-12-06 16:29:23.621858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.971 [2024-12-06 16:29:23.621940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:41.971 pt1 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.971 malloc2 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.971 [2024-12-06 16:29:23.652276] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:41.971 [2024-12-06 16:29:23.652345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.971 [2024-12-06 16:29:23.652366] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:41.971 [2024-12-06 16:29:23.652378] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.971 [2024-12-06 16:29:23.654820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.971 [2024-12-06 16:29:23.654929] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:41.971 
pt2 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:41.971 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.972 malloc3 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.972 [2024-12-06 16:29:23.681269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:41.972 [2024-12-06 16:29:23.681375] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.972 [2024-12-06 16:29:23.681414] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:41.972 [2024-12-06 16:29:23.681444] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.972 [2024-12-06 16:29:23.683800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.972 [2024-12-06 16:29:23.683878] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:41.972 pt3 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.972 malloc4 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.972 [2024-12-06 16:29:23.727344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:41.972 [2024-12-06 16:29:23.727472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.972 [2024-12-06 16:29:23.727523] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:41.972 [2024-12-06 16:29:23.727571] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.972 [2024-12-06 16:29:23.729989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.972 [2024-12-06 16:29:23.730074] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:41.972 pt4 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.972 [2024-12-06 16:29:23.739400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:41.972 [2024-12-06 16:29:23.741478] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:41.972 [2024-12-06 16:29:23.741590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:41.972 [2024-12-06 16:29:23.741702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:41.972 [2024-12-06 16:29:23.741940] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:41.972 [2024-12-06 16:29:23.741995] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:41.972 [2024-12-06 16:29:23.742365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:41.972 [2024-12-06 16:29:23.742576] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:41.972 [2024-12-06 16:29:23.742625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:41.972 [2024-12-06 16:29:23.742843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.972 
16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.972 "name": "raid_bdev1", 00:13:41.972 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:41.972 "strip_size_kb": 0, 00:13:41.972 "state": "online", 00:13:41.972 "raid_level": "raid1", 00:13:41.972 "superblock": true, 00:13:41.972 "num_base_bdevs": 4, 00:13:41.972 "num_base_bdevs_discovered": 4, 00:13:41.972 "num_base_bdevs_operational": 4, 00:13:41.972 "base_bdevs_list": [ 00:13:41.972 { 00:13:41.972 "name": "pt1", 00:13:41.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.972 "is_configured": true, 00:13:41.972 "data_offset": 2048, 00:13:41.972 "data_size": 63488 00:13:41.972 }, 00:13:41.972 { 00:13:41.972 "name": "pt2", 00:13:41.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.972 "is_configured": true, 00:13:41.972 "data_offset": 2048, 00:13:41.972 "data_size": 63488 00:13:41.972 }, 00:13:41.972 { 00:13:41.972 "name": "pt3", 00:13:41.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.972 "is_configured": true, 00:13:41.972 "data_offset": 2048, 00:13:41.972 "data_size": 63488 
00:13:41.972 }, 00:13:41.972 { 00:13:41.972 "name": "pt4", 00:13:41.972 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:41.972 "is_configured": true, 00:13:41.972 "data_offset": 2048, 00:13:41.972 "data_size": 63488 00:13:41.972 } 00:13:41.972 ] 00:13:41.972 }' 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.972 16:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.540 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:42.540 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:42.540 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.540 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.540 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.540 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.540 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:42.540 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:42.540 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.540 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.540 [2024-12-06 16:29:24.242952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.540 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.540 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:42.540 "name": "raid_bdev1", 00:13:42.540 "aliases": [ 00:13:42.540 "67591f5a-dc06-4734-9642-8c0679242325" 00:13:42.540 ], 
00:13:42.540 "product_name": "Raid Volume", 00:13:42.540 "block_size": 512, 00:13:42.540 "num_blocks": 63488, 00:13:42.540 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:42.540 "assigned_rate_limits": { 00:13:42.540 "rw_ios_per_sec": 0, 00:13:42.540 "rw_mbytes_per_sec": 0, 00:13:42.540 "r_mbytes_per_sec": 0, 00:13:42.540 "w_mbytes_per_sec": 0 00:13:42.540 }, 00:13:42.540 "claimed": false, 00:13:42.540 "zoned": false, 00:13:42.540 "supported_io_types": { 00:13:42.540 "read": true, 00:13:42.540 "write": true, 00:13:42.540 "unmap": false, 00:13:42.540 "flush": false, 00:13:42.540 "reset": true, 00:13:42.540 "nvme_admin": false, 00:13:42.540 "nvme_io": false, 00:13:42.540 "nvme_io_md": false, 00:13:42.540 "write_zeroes": true, 00:13:42.540 "zcopy": false, 00:13:42.540 "get_zone_info": false, 00:13:42.540 "zone_management": false, 00:13:42.540 "zone_append": false, 00:13:42.540 "compare": false, 00:13:42.540 "compare_and_write": false, 00:13:42.540 "abort": false, 00:13:42.540 "seek_hole": false, 00:13:42.540 "seek_data": false, 00:13:42.540 "copy": false, 00:13:42.540 "nvme_iov_md": false 00:13:42.540 }, 00:13:42.540 "memory_domains": [ 00:13:42.540 { 00:13:42.540 "dma_device_id": "system", 00:13:42.540 "dma_device_type": 1 00:13:42.540 }, 00:13:42.540 { 00:13:42.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.540 "dma_device_type": 2 00:13:42.540 }, 00:13:42.540 { 00:13:42.540 "dma_device_id": "system", 00:13:42.540 "dma_device_type": 1 00:13:42.540 }, 00:13:42.540 { 00:13:42.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.540 "dma_device_type": 2 00:13:42.540 }, 00:13:42.540 { 00:13:42.540 "dma_device_id": "system", 00:13:42.540 "dma_device_type": 1 00:13:42.540 }, 00:13:42.540 { 00:13:42.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.540 "dma_device_type": 2 00:13:42.540 }, 00:13:42.540 { 00:13:42.540 "dma_device_id": "system", 00:13:42.540 "dma_device_type": 1 00:13:42.540 }, 00:13:42.540 { 00:13:42.540 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:42.540 "dma_device_type": 2 00:13:42.540 } 00:13:42.540 ], 00:13:42.540 "driver_specific": { 00:13:42.540 "raid": { 00:13:42.540 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:42.540 "strip_size_kb": 0, 00:13:42.540 "state": "online", 00:13:42.540 "raid_level": "raid1", 00:13:42.540 "superblock": true, 00:13:42.540 "num_base_bdevs": 4, 00:13:42.540 "num_base_bdevs_discovered": 4, 00:13:42.540 "num_base_bdevs_operational": 4, 00:13:42.540 "base_bdevs_list": [ 00:13:42.540 { 00:13:42.540 "name": "pt1", 00:13:42.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.540 "is_configured": true, 00:13:42.540 "data_offset": 2048, 00:13:42.540 "data_size": 63488 00:13:42.540 }, 00:13:42.540 { 00:13:42.540 "name": "pt2", 00:13:42.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.540 "is_configured": true, 00:13:42.540 "data_offset": 2048, 00:13:42.540 "data_size": 63488 00:13:42.540 }, 00:13:42.540 { 00:13:42.540 "name": "pt3", 00:13:42.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.540 "is_configured": true, 00:13:42.540 "data_offset": 2048, 00:13:42.540 "data_size": 63488 00:13:42.540 }, 00:13:42.540 { 00:13:42.540 "name": "pt4", 00:13:42.540 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.540 "is_configured": true, 00:13:42.540 "data_offset": 2048, 00:13:42.540 "data_size": 63488 00:13:42.540 } 00:13:42.540 ] 00:13:42.540 } 00:13:42.540 } 00:13:42.540 }' 00:13:42.541 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.541 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:42.541 pt2 00:13:42.541 pt3 00:13:42.541 pt4' 00:13:42.541 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.799 16:29:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.799 [2024-12-06 16:29:24.590385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=67591f5a-dc06-4734-9642-8c0679242325 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 67591f5a-dc06-4734-9642-8c0679242325 ']' 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.799 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.059 [2024-12-06 16:29:24.641893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.059 [2024-12-06 16:29:24.641934] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.059 [2024-12-06 16:29:24.642025] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.059 [2024-12-06 16:29:24.642122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.059 [2024-12-06 16:29:24.642139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.059 16:29:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.059 [2024-12-06 16:29:24.805665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:43.059 [2024-12-06 16:29:24.807888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:43.059 [2024-12-06 16:29:24.807955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:43.059 [2024-12-06 16:29:24.807990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:43.059 [2024-12-06 16:29:24.808048] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:43.059 [2024-12-06 16:29:24.808102] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:43.059 [2024-12-06 16:29:24.808125] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:43.059 [2024-12-06 16:29:24.808146] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:43.059 [2024-12-06 16:29:24.808163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.059 [2024-12-06 16:29:24.808174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
raid_bdev1, state configuring 00:13:43.059 request: 00:13:43.059 { 00:13:43.059 "name": "raid_bdev1", 00:13:43.059 "raid_level": "raid1", 00:13:43.059 "base_bdevs": [ 00:13:43.059 "malloc1", 00:13:43.059 "malloc2", 00:13:43.059 "malloc3", 00:13:43.059 "malloc4" 00:13:43.059 ], 00:13:43.059 "superblock": false, 00:13:43.059 "method": "bdev_raid_create", 00:13:43.059 "req_id": 1 00:13:43.059 } 00:13:43.059 Got JSON-RPC error response 00:13:43.059 response: 00:13:43.059 { 00:13:43.059 "code": -17, 00:13:43.059 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:43.059 } 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:43.059 
16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.059 [2024-12-06 16:29:24.873483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:43.059 [2024-12-06 16:29:24.873605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.059 [2024-12-06 16:29:24.873649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:43.059 [2024-12-06 16:29:24.873687] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.059 [2024-12-06 16:29:24.876060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.059 [2024-12-06 16:29:24.876139] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:43.059 [2024-12-06 16:29:24.876269] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:43.059 [2024-12-06 16:29:24.876351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:43.059 pt1 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.059 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.060 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.060 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.060 16:29:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.060 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.060 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.060 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.060 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.060 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.060 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.060 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.319 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.319 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.319 "name": "raid_bdev1", 00:13:43.319 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:43.319 "strip_size_kb": 0, 00:13:43.319 "state": "configuring", 00:13:43.319 "raid_level": "raid1", 00:13:43.319 "superblock": true, 00:13:43.319 "num_base_bdevs": 4, 00:13:43.319 "num_base_bdevs_discovered": 1, 00:13:43.319 "num_base_bdevs_operational": 4, 00:13:43.319 "base_bdevs_list": [ 00:13:43.319 { 00:13:43.319 "name": "pt1", 00:13:43.319 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.319 "is_configured": true, 00:13:43.319 "data_offset": 2048, 00:13:43.319 "data_size": 63488 00:13:43.319 }, 00:13:43.319 { 00:13:43.319 "name": null, 00:13:43.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.319 "is_configured": false, 00:13:43.319 "data_offset": 2048, 00:13:43.319 "data_size": 63488 00:13:43.319 }, 00:13:43.319 { 00:13:43.319 "name": null, 00:13:43.320 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.320 
"is_configured": false, 00:13:43.320 "data_offset": 2048, 00:13:43.320 "data_size": 63488 00:13:43.320 }, 00:13:43.320 { 00:13:43.320 "name": null, 00:13:43.320 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.320 "is_configured": false, 00:13:43.320 "data_offset": 2048, 00:13:43.320 "data_size": 63488 00:13:43.320 } 00:13:43.320 ] 00:13:43.320 }' 00:13:43.320 16:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.320 16:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.579 [2024-12-06 16:29:25.320830] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:43.579 [2024-12-06 16:29:25.320970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.579 [2024-12-06 16:29:25.321036] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:43.579 [2024-12-06 16:29:25.321082] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.579 [2024-12-06 16:29:25.321591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.579 [2024-12-06 16:29:25.321657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:43.579 [2024-12-06 16:29:25.321785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:43.579 [2024-12-06 16:29:25.321842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:43.579 pt2 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.579 [2024-12-06 16:29:25.332813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.579 "name": "raid_bdev1", 00:13:43.579 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:43.579 "strip_size_kb": 0, 00:13:43.579 "state": "configuring", 00:13:43.579 "raid_level": "raid1", 00:13:43.579 "superblock": true, 00:13:43.579 "num_base_bdevs": 4, 00:13:43.579 "num_base_bdevs_discovered": 1, 00:13:43.579 "num_base_bdevs_operational": 4, 00:13:43.579 "base_bdevs_list": [ 00:13:43.579 { 00:13:43.579 "name": "pt1", 00:13:43.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.579 "is_configured": true, 00:13:43.579 "data_offset": 2048, 00:13:43.579 "data_size": 63488 00:13:43.579 }, 00:13:43.579 { 00:13:43.579 "name": null, 00:13:43.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.579 "is_configured": false, 00:13:43.579 "data_offset": 0, 00:13:43.579 "data_size": 63488 00:13:43.579 }, 00:13:43.579 { 00:13:43.579 "name": null, 00:13:43.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.579 "is_configured": false, 00:13:43.579 "data_offset": 2048, 00:13:43.579 "data_size": 63488 00:13:43.579 }, 00:13:43.579 { 00:13:43.579 "name": null, 00:13:43.579 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.579 "is_configured": false, 00:13:43.579 "data_offset": 2048, 00:13:43.579 "data_size": 63488 00:13:43.579 } 00:13:43.579 ] 00:13:43.579 }' 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.579 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.148 [2024-12-06 16:29:25.776050] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:44.148 [2024-12-06 16:29:25.776195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.148 [2024-12-06 16:29:25.776265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:44.148 [2024-12-06 16:29:25.776326] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.148 [2024-12-06 16:29:25.776792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.148 [2024-12-06 16:29:25.776860] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:44.148 [2024-12-06 16:29:25.776975] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:44.148 [2024-12-06 16:29:25.777036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:44.148 pt2 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:44.148 16:29:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.148 [2024-12-06 16:29:25.787996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:44.148 [2024-12-06 16:29:25.788095] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.148 [2024-12-06 16:29:25.788133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:44.148 [2024-12-06 16:29:25.788145] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.148 [2024-12-06 16:29:25.788547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.148 [2024-12-06 16:29:25.788570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:44.148 [2024-12-06 16:29:25.788637] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:44.148 [2024-12-06 16:29:25.788665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:44.148 pt3 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.148 [2024-12-06 16:29:25.799963] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:44.148 [2024-12-06 
16:29:25.800022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.148 [2024-12-06 16:29:25.800039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:44.148 [2024-12-06 16:29:25.800051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.148 [2024-12-06 16:29:25.800445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.148 [2024-12-06 16:29:25.800475] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:44.148 [2024-12-06 16:29:25.800539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:44.148 [2024-12-06 16:29:25.800571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:44.148 [2024-12-06 16:29:25.800710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:44.148 [2024-12-06 16:29:25.800724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:44.148 [2024-12-06 16:29:25.800996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:44.148 [2024-12-06 16:29:25.801139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:44.148 [2024-12-06 16:29:25.801150] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:13:44.148 [2024-12-06 16:29:25.801289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.148 pt4 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.148 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.148 "name": "raid_bdev1", 00:13:44.148 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:44.148 "strip_size_kb": 0, 00:13:44.148 "state": "online", 00:13:44.148 "raid_level": "raid1", 00:13:44.148 "superblock": true, 00:13:44.148 "num_base_bdevs": 4, 00:13:44.148 
"num_base_bdevs_discovered": 4, 00:13:44.148 "num_base_bdevs_operational": 4, 00:13:44.148 "base_bdevs_list": [ 00:13:44.148 { 00:13:44.148 "name": "pt1", 00:13:44.148 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:44.148 "is_configured": true, 00:13:44.148 "data_offset": 2048, 00:13:44.148 "data_size": 63488 00:13:44.148 }, 00:13:44.148 { 00:13:44.148 "name": "pt2", 00:13:44.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.148 "is_configured": true, 00:13:44.148 "data_offset": 2048, 00:13:44.148 "data_size": 63488 00:13:44.149 }, 00:13:44.149 { 00:13:44.149 "name": "pt3", 00:13:44.149 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.149 "is_configured": true, 00:13:44.149 "data_offset": 2048, 00:13:44.149 "data_size": 63488 00:13:44.149 }, 00:13:44.149 { 00:13:44.149 "name": "pt4", 00:13:44.149 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:44.149 "is_configured": true, 00:13:44.149 "data_offset": 2048, 00:13:44.149 "data_size": 63488 00:13:44.149 } 00:13:44.149 ] 00:13:44.149 }' 00:13:44.149 16:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.149 16:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.732 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:44.732 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:44.732 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:44.733 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:44.733 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:44.733 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:44.733 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:44.733 16:29:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.733 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.733 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.733 [2024-12-06 16:29:26.275651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.733 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.733 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:44.733 "name": "raid_bdev1", 00:13:44.733 "aliases": [ 00:13:44.733 "67591f5a-dc06-4734-9642-8c0679242325" 00:13:44.733 ], 00:13:44.733 "product_name": "Raid Volume", 00:13:44.733 "block_size": 512, 00:13:44.733 "num_blocks": 63488, 00:13:44.733 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:44.733 "assigned_rate_limits": { 00:13:44.733 "rw_ios_per_sec": 0, 00:13:44.733 "rw_mbytes_per_sec": 0, 00:13:44.733 "r_mbytes_per_sec": 0, 00:13:44.733 "w_mbytes_per_sec": 0 00:13:44.733 }, 00:13:44.733 "claimed": false, 00:13:44.733 "zoned": false, 00:13:44.733 "supported_io_types": { 00:13:44.733 "read": true, 00:13:44.733 "write": true, 00:13:44.733 "unmap": false, 00:13:44.733 "flush": false, 00:13:44.733 "reset": true, 00:13:44.733 "nvme_admin": false, 00:13:44.733 "nvme_io": false, 00:13:44.733 "nvme_io_md": false, 00:13:44.733 "write_zeroes": true, 00:13:44.733 "zcopy": false, 00:13:44.733 "get_zone_info": false, 00:13:44.733 "zone_management": false, 00:13:44.733 "zone_append": false, 00:13:44.733 "compare": false, 00:13:44.733 "compare_and_write": false, 00:13:44.733 "abort": false, 00:13:44.734 "seek_hole": false, 00:13:44.735 "seek_data": false, 00:13:44.735 "copy": false, 00:13:44.735 "nvme_iov_md": false 00:13:44.735 }, 00:13:44.735 "memory_domains": [ 00:13:44.735 { 00:13:44.735 "dma_device_id": "system", 00:13:44.735 
"dma_device_type": 1 00:13:44.735 }, 00:13:44.735 { 00:13:44.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.735 "dma_device_type": 2 00:13:44.735 }, 00:13:44.735 { 00:13:44.735 "dma_device_id": "system", 00:13:44.735 "dma_device_type": 1 00:13:44.735 }, 00:13:44.735 { 00:13:44.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.735 "dma_device_type": 2 00:13:44.735 }, 00:13:44.735 { 00:13:44.735 "dma_device_id": "system", 00:13:44.735 "dma_device_type": 1 00:13:44.735 }, 00:13:44.735 { 00:13:44.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.735 "dma_device_type": 2 00:13:44.735 }, 00:13:44.735 { 00:13:44.735 "dma_device_id": "system", 00:13:44.735 "dma_device_type": 1 00:13:44.735 }, 00:13:44.735 { 00:13:44.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.735 "dma_device_type": 2 00:13:44.735 } 00:13:44.735 ], 00:13:44.735 "driver_specific": { 00:13:44.735 "raid": { 00:13:44.735 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:44.735 "strip_size_kb": 0, 00:13:44.735 "state": "online", 00:13:44.735 "raid_level": "raid1", 00:13:44.735 "superblock": true, 00:13:44.735 "num_base_bdevs": 4, 00:13:44.735 "num_base_bdevs_discovered": 4, 00:13:44.735 "num_base_bdevs_operational": 4, 00:13:44.735 "base_bdevs_list": [ 00:13:44.735 { 00:13:44.735 "name": "pt1", 00:13:44.735 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:44.735 "is_configured": true, 00:13:44.735 "data_offset": 2048, 00:13:44.735 "data_size": 63488 00:13:44.735 }, 00:13:44.735 { 00:13:44.736 "name": "pt2", 00:13:44.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.736 "is_configured": true, 00:13:44.736 "data_offset": 2048, 00:13:44.736 "data_size": 63488 00:13:44.736 }, 00:13:44.736 { 00:13:44.736 "name": "pt3", 00:13:44.736 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.736 "is_configured": true, 00:13:44.736 "data_offset": 2048, 00:13:44.736 "data_size": 63488 00:13:44.736 }, 00:13:44.736 { 00:13:44.736 "name": "pt4", 00:13:44.736 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:44.736 "is_configured": true, 00:13:44.736 "data_offset": 2048, 00:13:44.736 "data_size": 63488 00:13:44.736 } 00:13:44.736 ] 00:13:44.736 } 00:13:44.736 } 00:13:44.736 }' 00:13:44.736 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:44.736 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:44.736 pt2 00:13:44.736 pt3 00:13:44.736 pt4' 00:13:44.736 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.736 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:44.736 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.736 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:44.736 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.736 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.736 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.736 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:44.737 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:44.738 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.738 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.738 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.738 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.738 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.738 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:44.738 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.738 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.738 [2024-12-06 16:29:26.559163] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 67591f5a-dc06-4734-9642-8c0679242325 '!=' 67591f5a-dc06-4734-9642-8c0679242325 ']' 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.999 [2024-12-06 16:29:26.602730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:44.999 16:29:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.999 "name": "raid_bdev1", 00:13:44.999 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:44.999 "strip_size_kb": 0, 00:13:44.999 "state": "online", 
00:13:44.999 "raid_level": "raid1", 00:13:44.999 "superblock": true, 00:13:44.999 "num_base_bdevs": 4, 00:13:44.999 "num_base_bdevs_discovered": 3, 00:13:44.999 "num_base_bdevs_operational": 3, 00:13:44.999 "base_bdevs_list": [ 00:13:44.999 { 00:13:44.999 "name": null, 00:13:44.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.999 "is_configured": false, 00:13:44.999 "data_offset": 0, 00:13:44.999 "data_size": 63488 00:13:44.999 }, 00:13:44.999 { 00:13:44.999 "name": "pt2", 00:13:44.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.999 "is_configured": true, 00:13:44.999 "data_offset": 2048, 00:13:44.999 "data_size": 63488 00:13:44.999 }, 00:13:44.999 { 00:13:44.999 "name": "pt3", 00:13:44.999 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.999 "is_configured": true, 00:13:44.999 "data_offset": 2048, 00:13:44.999 "data_size": 63488 00:13:44.999 }, 00:13:44.999 { 00:13:44.999 "name": "pt4", 00:13:44.999 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:44.999 "is_configured": true, 00:13:44.999 "data_offset": 2048, 00:13:44.999 "data_size": 63488 00:13:44.999 } 00:13:44.999 ] 00:13:44.999 }' 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.999 16:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.256 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:45.256 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.256 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.256 [2024-12-06 16:29:27.073909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:45.256 [2024-12-06 16:29:27.074000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:45.256 [2024-12-06 16:29:27.074114] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:45.256 [2024-12-06 16:29:27.074246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.256 [2024-12-06 16:29:27.074312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:13:45.256 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.256 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:45.256 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.256 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.256 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.513 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.513 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:45.513 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:45.513 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:45.513 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:45.513 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:45.513 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.513 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.513 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.513 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:45.514 
16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.514 [2024-12-06 16:29:27.169724] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:45.514 [2024-12-06 16:29:27.169843] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.514 [2024-12-06 16:29:27.169900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:45.514 [2024-12-06 16:29:27.169939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.514 [2024-12-06 16:29:27.172531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.514 [2024-12-06 16:29:27.172622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:45.514 [2024-12-06 16:29:27.172734] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:45.514 [2024-12-06 16:29:27.172806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:45.514 pt2 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.514 "name": "raid_bdev1", 00:13:45.514 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:45.514 "strip_size_kb": 0, 00:13:45.514 "state": "configuring", 00:13:45.514 "raid_level": "raid1", 00:13:45.514 "superblock": true, 00:13:45.514 "num_base_bdevs": 4, 00:13:45.514 "num_base_bdevs_discovered": 1, 00:13:45.514 "num_base_bdevs_operational": 3, 00:13:45.514 "base_bdevs_list": [ 00:13:45.514 { 00:13:45.514 "name": null, 00:13:45.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.514 "is_configured": false, 00:13:45.514 "data_offset": 2048, 00:13:45.514 "data_size": 63488 00:13:45.514 }, 00:13:45.514 { 00:13:45.514 "name": "pt2", 00:13:45.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:45.514 "is_configured": true, 00:13:45.514 "data_offset": 2048, 00:13:45.514 "data_size": 63488 00:13:45.514 }, 00:13:45.514 { 00:13:45.514 "name": null, 00:13:45.514 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:45.514 "is_configured": false, 00:13:45.514 "data_offset": 2048, 00:13:45.514 "data_size": 63488 00:13:45.514 }, 00:13:45.514 { 00:13:45.514 "name": null, 00:13:45.514 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:45.514 "is_configured": false, 00:13:45.514 "data_offset": 2048, 00:13:45.514 "data_size": 63488 00:13:45.514 } 00:13:45.514 ] 00:13:45.514 }' 
00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.514 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.080 [2024-12-06 16:29:27.664926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:46.080 [2024-12-06 16:29:27.665012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.080 [2024-12-06 16:29:27.665036] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:46.080 [2024-12-06 16:29:27.665052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.080 [2024-12-06 16:29:27.665550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.080 [2024-12-06 16:29:27.665590] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:46.080 [2024-12-06 16:29:27.665680] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:46.080 [2024-12-06 16:29:27.665708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:46.080 pt3 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.080 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.081 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.081 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.081 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.081 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.081 "name": "raid_bdev1", 00:13:46.081 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:46.081 "strip_size_kb": 0, 00:13:46.081 "state": "configuring", 00:13:46.081 "raid_level": "raid1", 00:13:46.081 "superblock": true, 00:13:46.081 "num_base_bdevs": 4, 00:13:46.081 "num_base_bdevs_discovered": 2, 00:13:46.081 "num_base_bdevs_operational": 3, 00:13:46.081 
"base_bdevs_list": [ 00:13:46.081 { 00:13:46.081 "name": null, 00:13:46.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.081 "is_configured": false, 00:13:46.081 "data_offset": 2048, 00:13:46.081 "data_size": 63488 00:13:46.081 }, 00:13:46.081 { 00:13:46.081 "name": "pt2", 00:13:46.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.081 "is_configured": true, 00:13:46.081 "data_offset": 2048, 00:13:46.081 "data_size": 63488 00:13:46.081 }, 00:13:46.081 { 00:13:46.081 "name": "pt3", 00:13:46.081 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.081 "is_configured": true, 00:13:46.081 "data_offset": 2048, 00:13:46.081 "data_size": 63488 00:13:46.081 }, 00:13:46.081 { 00:13:46.081 "name": null, 00:13:46.081 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:46.081 "is_configured": false, 00:13:46.081 "data_offset": 2048, 00:13:46.081 "data_size": 63488 00:13:46.081 } 00:13:46.081 ] 00:13:46.081 }' 00:13:46.081 16:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.081 16:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.340 [2024-12-06 16:29:28.128183] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:46.340 [2024-12-06 16:29:28.128346] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.340 [2024-12-06 16:29:28.128395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:46.340 [2024-12-06 16:29:28.128436] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.340 [2024-12-06 16:29:28.128922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.340 [2024-12-06 16:29:28.129000] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:46.340 [2024-12-06 16:29:28.129120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:46.340 [2024-12-06 16:29:28.129180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:46.340 [2024-12-06 16:29:28.129349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:46.340 [2024-12-06 16:29:28.129397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:46.340 [2024-12-06 16:29:28.129705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:46.340 [2024-12-06 16:29:28.129900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:46.340 [2024-12-06 16:29:28.129948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:13:46.340 [2024-12-06 16:29:28.130128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.340 pt4 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.340 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.599 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.599 "name": "raid_bdev1", 00:13:46.599 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:46.599 "strip_size_kb": 0, 00:13:46.599 "state": "online", 00:13:46.599 "raid_level": "raid1", 00:13:46.599 "superblock": true, 00:13:46.599 "num_base_bdevs": 4, 00:13:46.599 "num_base_bdevs_discovered": 3, 00:13:46.599 "num_base_bdevs_operational": 3, 00:13:46.599 "base_bdevs_list": [ 00:13:46.599 { 00:13:46.599 "name": null, 00:13:46.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.599 "is_configured": false, 00:13:46.599 
"data_offset": 2048, 00:13:46.599 "data_size": 63488 00:13:46.599 }, 00:13:46.599 { 00:13:46.599 "name": "pt2", 00:13:46.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.599 "is_configured": true, 00:13:46.599 "data_offset": 2048, 00:13:46.599 "data_size": 63488 00:13:46.599 }, 00:13:46.599 { 00:13:46.599 "name": "pt3", 00:13:46.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.599 "is_configured": true, 00:13:46.599 "data_offset": 2048, 00:13:46.599 "data_size": 63488 00:13:46.599 }, 00:13:46.599 { 00:13:46.599 "name": "pt4", 00:13:46.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:46.599 "is_configured": true, 00:13:46.599 "data_offset": 2048, 00:13:46.599 "data_size": 63488 00:13:46.599 } 00:13:46.599 ] 00:13:46.599 }' 00:13:46.599 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.600 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.858 [2024-12-06 16:29:28.579503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.858 [2024-12-06 16:29:28.579628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.858 [2024-12-06 16:29:28.579731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.858 [2024-12-06 16:29:28.579844] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.858 [2024-12-06 16:29:28.579858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:13:46.858 16:29:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.858 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.858 [2024-12-06 16:29:28.647385] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:46.858 [2024-12-06 16:29:28.647465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:46.858 [2024-12-06 16:29:28.647490] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:46.859 [2024-12-06 16:29:28.647501] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.859 [2024-12-06 16:29:28.650063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.859 [2024-12-06 16:29:28.650107] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:46.859 [2024-12-06 16:29:28.650199] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:46.859 [2024-12-06 16:29:28.650263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:46.859 [2024-12-06 16:29:28.650410] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:46.859 [2024-12-06 16:29:28.650477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.859 [2024-12-06 16:29:28.650499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:13:46.859 [2024-12-06 16:29:28.650547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:46.859 [2024-12-06 16:29:28.650666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:46.859 pt1 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.859 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.117 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.117 "name": "raid_bdev1", 00:13:47.117 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:47.117 "strip_size_kb": 0, 00:13:47.117 "state": "configuring", 00:13:47.117 "raid_level": "raid1", 00:13:47.117 "superblock": true, 00:13:47.117 "num_base_bdevs": 4, 00:13:47.117 "num_base_bdevs_discovered": 2, 00:13:47.117 "num_base_bdevs_operational": 3, 00:13:47.117 "base_bdevs_list": [ 00:13:47.117 { 00:13:47.117 "name": null, 00:13:47.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.117 "is_configured": false, 00:13:47.117 "data_offset": 2048, 00:13:47.117 
"data_size": 63488 00:13:47.117 }, 00:13:47.117 { 00:13:47.117 "name": "pt2", 00:13:47.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.117 "is_configured": true, 00:13:47.117 "data_offset": 2048, 00:13:47.117 "data_size": 63488 00:13:47.117 }, 00:13:47.117 { 00:13:47.117 "name": "pt3", 00:13:47.117 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.117 "is_configured": true, 00:13:47.117 "data_offset": 2048, 00:13:47.117 "data_size": 63488 00:13:47.117 }, 00:13:47.117 { 00:13:47.117 "name": null, 00:13:47.117 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:47.117 "is_configured": false, 00:13:47.117 "data_offset": 2048, 00:13:47.117 "data_size": 63488 00:13:47.117 } 00:13:47.117 ] 00:13:47.117 }' 00:13:47.117 16:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.117 16:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.376 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:47.376 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.376 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:47.376 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.376 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.376 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:47.376 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:47.376 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.376 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.376 [2024-12-06 
16:29:29.162553] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:47.376 [2024-12-06 16:29:29.162718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.376 [2024-12-06 16:29:29.162772] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:47.376 [2024-12-06 16:29:29.162812] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.376 [2024-12-06 16:29:29.163337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.376 [2024-12-06 16:29:29.163420] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:47.376 [2024-12-06 16:29:29.163551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:47.376 [2024-12-06 16:29:29.163618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:47.376 [2024-12-06 16:29:29.163770] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:47.376 [2024-12-06 16:29:29.163817] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:47.376 [2024-12-06 16:29:29.164127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:47.376 [2024-12-06 16:29:29.164326] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:47.376 [2024-12-06 16:29:29.164373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:47.376 [2024-12-06 16:29:29.164515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.376 pt4 00:13:47.376 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.376 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:47.377 16:29:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.377 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.635 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.635 "name": "raid_bdev1", 00:13:47.635 "uuid": "67591f5a-dc06-4734-9642-8c0679242325", 00:13:47.635 "strip_size_kb": 0, 00:13:47.635 "state": "online", 00:13:47.635 "raid_level": "raid1", 00:13:47.635 "superblock": true, 00:13:47.635 "num_base_bdevs": 4, 00:13:47.635 "num_base_bdevs_discovered": 3, 00:13:47.635 "num_base_bdevs_operational": 3, 00:13:47.635 "base_bdevs_list": [ 00:13:47.635 { 
00:13:47.635 "name": null, 00:13:47.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.635 "is_configured": false, 00:13:47.635 "data_offset": 2048, 00:13:47.635 "data_size": 63488 00:13:47.635 }, 00:13:47.635 { 00:13:47.635 "name": "pt2", 00:13:47.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.635 "is_configured": true, 00:13:47.635 "data_offset": 2048, 00:13:47.635 "data_size": 63488 00:13:47.635 }, 00:13:47.635 { 00:13:47.635 "name": "pt3", 00:13:47.635 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.635 "is_configured": true, 00:13:47.635 "data_offset": 2048, 00:13:47.635 "data_size": 63488 00:13:47.635 }, 00:13:47.635 { 00:13:47.635 "name": "pt4", 00:13:47.635 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:47.636 "is_configured": true, 00:13:47.636 "data_offset": 2048, 00:13:47.636 "data_size": 63488 00:13:47.636 } 00:13:47.636 ] 00:13:47.636 }' 00:13:47.636 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.636 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:47.893 
16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.893 [2024-12-06 16:29:29.674069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 67591f5a-dc06-4734-9642-8c0679242325 '!=' 67591f5a-dc06-4734-9642-8c0679242325 ']' 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85682 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 85682 ']' 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85682 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85682 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85682' 00:13:47.893 killing process with pid 85682 00:13:47.893 16:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 85682 00:13:47.893 [2024-12-06 16:29:29.724464] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:47.893 [2024-12-06 16:29:29.724623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.893 16:29:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 85682 00:13:47.893 [2024-12-06 16:29:29.724761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.893 [2024-12-06 16:29:29.724816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:48.151 [2024-12-06 16:29:29.772934] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.409 ************************************ 00:13:48.409 END TEST raid_superblock_test 00:13:48.409 ************************************ 00:13:48.409 16:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:48.409 00:13:48.409 real 0m7.357s 00:13:48.409 user 0m12.401s 00:13:48.409 sys 0m1.556s 00:13:48.409 16:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.409 16:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.409 16:29:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:48.409 16:29:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:48.409 16:29:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.409 16:29:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:48.409 ************************************ 00:13:48.409 START TEST raid_read_error_test 00:13:48.409 ************************************ 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:48.409 
16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:48.409 16:29:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9nULHTKuKU 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86164 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86164 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 86164 ']' 00:13:48.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.409 16:29:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.409 [2024-12-06 16:29:30.174965] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:13:48.410 [2024-12-06 16:29:30.175190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86164 ] 00:13:48.667 [2024-12-06 16:29:30.334712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.667 [2024-12-06 16:29:30.366159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.667 [2024-12-06 16:29:30.410993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.667 [2024-12-06 16:29:30.411034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.616 BaseBdev1_malloc 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.616 true 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.616 [2024-12-06 16:29:31.116732] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:49.616 [2024-12-06 16:29:31.116820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.616 [2024-12-06 16:29:31.116860] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:49.616 [2024-12-06 16:29:31.116879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.616 [2024-12-06 16:29:31.119403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.616 [2024-12-06 16:29:31.119453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:49.616 BaseBdev1 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.616 BaseBdev2_malloc 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.616 true 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.616 [2024-12-06 16:29:31.155162] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:49.616 [2024-12-06 16:29:31.155318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.616 [2024-12-06 16:29:31.155360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:49.616 [2024-12-06 16:29:31.155374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.616 [2024-12-06 16:29:31.158150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.616 [2024-12-06 16:29:31.158216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:49.616 BaseBdev2 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.616 BaseBdev3_malloc 00:13:49.616 16:29:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.616 true 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.616 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.616 [2024-12-06 16:29:31.192733] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:49.617 [2024-12-06 16:29:31.192794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.617 [2024-12-06 16:29:31.192828] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:49.617 [2024-12-06 16:29:31.192839] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.617 [2024-12-06 16:29:31.195461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.617 [2024-12-06 16:29:31.195504] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:49.617 BaseBdev3 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.617 BaseBdev4_malloc 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.617 true 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.617 [2024-12-06 16:29:31.241148] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:49.617 [2024-12-06 16:29:31.241268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.617 [2024-12-06 16:29:31.241303] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:49.617 [2024-12-06 16:29:31.241314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.617 [2024-12-06 16:29:31.243845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.617 [2024-12-06 16:29:31.243890] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:49.617 BaseBdev4 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.617 [2024-12-06 16:29:31.249175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.617 [2024-12-06 16:29:31.251419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.617 [2024-12-06 16:29:31.251521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.617 [2024-12-06 16:29:31.251610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:49.617 [2024-12-06 16:29:31.251859] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:13:49.617 [2024-12-06 16:29:31.251874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:49.617 [2024-12-06 16:29:31.252251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:49.617 [2024-12-06 16:29:31.252429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:13:49.617 [2024-12-06 16:29:31.252456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:13:49.617 [2024-12-06 16:29:31.252667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:49.617 16:29:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.617 "name": "raid_bdev1", 00:13:49.617 "uuid": "89f89cb4-695a-42b0-a51b-d46e4d1f57ed", 00:13:49.617 "strip_size_kb": 0, 00:13:49.617 "state": "online", 00:13:49.617 "raid_level": "raid1", 00:13:49.617 "superblock": true, 00:13:49.617 "num_base_bdevs": 4, 00:13:49.617 "num_base_bdevs_discovered": 4, 00:13:49.617 "num_base_bdevs_operational": 4, 00:13:49.617 "base_bdevs_list": [ 00:13:49.617 { 
00:13:49.617 "name": "BaseBdev1", 00:13:49.617 "uuid": "60ea3e1b-1ed3-50ad-9b54-b6c2302ae570", 00:13:49.617 "is_configured": true, 00:13:49.617 "data_offset": 2048, 00:13:49.617 "data_size": 63488 00:13:49.617 }, 00:13:49.617 { 00:13:49.617 "name": "BaseBdev2", 00:13:49.617 "uuid": "2f995856-e54b-5df2-b122-338b8d854d4d", 00:13:49.617 "is_configured": true, 00:13:49.617 "data_offset": 2048, 00:13:49.617 "data_size": 63488 00:13:49.617 }, 00:13:49.617 { 00:13:49.617 "name": "BaseBdev3", 00:13:49.617 "uuid": "5157f90e-6362-5910-a2cb-abc9c428a2d4", 00:13:49.617 "is_configured": true, 00:13:49.617 "data_offset": 2048, 00:13:49.617 "data_size": 63488 00:13:49.617 }, 00:13:49.617 { 00:13:49.617 "name": "BaseBdev4", 00:13:49.617 "uuid": "d9afe3da-7130-59b9-abdc-b29401abb15b", 00:13:49.617 "is_configured": true, 00:13:49.617 "data_offset": 2048, 00:13:49.617 "data_size": 63488 00:13:49.617 } 00:13:49.617 ] 00:13:49.617 }' 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.617 16:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.199 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:50.199 16:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:50.199 [2024-12-06 16:29:31.820661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.135 16:29:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.135 16:29:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.135 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.135 "name": "raid_bdev1", 00:13:51.135 "uuid": "89f89cb4-695a-42b0-a51b-d46e4d1f57ed", 00:13:51.135 "strip_size_kb": 0, 00:13:51.135 "state": "online", 00:13:51.136 "raid_level": "raid1", 00:13:51.136 "superblock": true, 00:13:51.136 "num_base_bdevs": 4, 00:13:51.136 "num_base_bdevs_discovered": 4, 00:13:51.136 "num_base_bdevs_operational": 4, 00:13:51.136 "base_bdevs_list": [ 00:13:51.136 { 00:13:51.136 "name": "BaseBdev1", 00:13:51.136 "uuid": "60ea3e1b-1ed3-50ad-9b54-b6c2302ae570", 00:13:51.136 "is_configured": true, 00:13:51.136 "data_offset": 2048, 00:13:51.136 "data_size": 63488 00:13:51.136 }, 00:13:51.136 { 00:13:51.136 "name": "BaseBdev2", 00:13:51.136 "uuid": "2f995856-e54b-5df2-b122-338b8d854d4d", 00:13:51.136 "is_configured": true, 00:13:51.136 "data_offset": 2048, 00:13:51.136 "data_size": 63488 00:13:51.136 }, 00:13:51.136 { 00:13:51.136 "name": "BaseBdev3", 00:13:51.136 "uuid": "5157f90e-6362-5910-a2cb-abc9c428a2d4", 00:13:51.136 "is_configured": true, 00:13:51.136 "data_offset": 2048, 00:13:51.136 "data_size": 63488 00:13:51.136 }, 00:13:51.136 { 00:13:51.136 "name": "BaseBdev4", 00:13:51.136 "uuid": "d9afe3da-7130-59b9-abdc-b29401abb15b", 00:13:51.136 "is_configured": true, 00:13:51.136 "data_offset": 2048, 00:13:51.136 "data_size": 63488 00:13:51.136 } 00:13:51.136 ] 00:13:51.136 }' 00:13:51.136 16:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.136 16:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.395 16:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:51.395 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.395 16:29:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.395 [2024-12-06 16:29:33.216584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:51.395 [2024-12-06 16:29:33.216710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.395 [2024-12-06 16:29:33.219993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.395 [2024-12-06 16:29:33.220094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.395 [2024-12-06 16:29:33.220306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.395 [2024-12-06 16:29:33.220383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:13:51.395 { 00:13:51.395 "results": [ 00:13:51.395 { 00:13:51.395 "job": "raid_bdev1", 00:13:51.395 "core_mask": "0x1", 00:13:51.395 "workload": "randrw", 00:13:51.395 "percentage": 50, 00:13:51.395 "status": "finished", 00:13:51.395 "queue_depth": 1, 00:13:51.395 "io_size": 131072, 00:13:51.395 "runtime": 1.396921, 00:13:51.395 "iops": 10272.59236563843, 00:13:51.395 "mibps": 1284.0740457048037, 00:13:51.395 "io_failed": 0, 00:13:51.395 "io_timeout": 0, 00:13:51.395 "avg_latency_us": 94.3945536265843, 00:13:51.395 "min_latency_us": 24.593886462882097, 00:13:51.395 "max_latency_us": 1581.1633187772925 00:13:51.395 } 00:13:51.395 ], 00:13:51.395 "core_count": 1 00:13:51.395 } 00:13:51.395 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.395 16:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86164 00:13:51.395 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 86164 ']' 00:13:51.395 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 86164 00:13:51.395 16:29:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:51.395 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.652 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86164 00:13:51.652 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.652 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.652 killing process with pid 86164 00:13:51.652 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86164' 00:13:51.652 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 86164 00:13:51.652 [2024-12-06 16:29:33.266167] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.652 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 86164 00:13:51.652 [2024-12-06 16:29:33.303343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.927 16:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9nULHTKuKU 00:13:51.927 16:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:51.927 16:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:51.927 16:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:51.927 16:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:51.927 16:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:51.927 16:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:51.927 16:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:51.927 00:13:51.927 real 0m3.468s 00:13:51.927 user 0m4.409s 00:13:51.927 sys 0m0.580s 
00:13:51.927 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.927 ************************************ 00:13:51.927 END TEST raid_read_error_test 00:13:51.927 ************************************ 00:13:51.927 16:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.927 16:29:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:51.927 16:29:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:51.927 16:29:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.927 16:29:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.927 ************************************ 00:13:51.927 START TEST raid_write_error_test 00:13:51.927 ************************************ 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lDEc1B4aPQ 00:13:51.927 16:29:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86293 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86293 00:13:51.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 86293 ']' 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.927 16:29:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.927 [2024-12-06 16:29:33.703791] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:13:51.927 [2024-12-06 16:29:33.703911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86293 ] 00:13:52.184 [2024-12-06 16:29:33.859977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.184 [2024-12-06 16:29:33.900105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.184 [2024-12-06 16:29:33.952250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.184 [2024-12-06 16:29:33.952289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.750 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.750 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:52.750 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:52.750 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:52.750 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.750 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 BaseBdev1_malloc 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 true 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 [2024-12-06 16:29:34.621508] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:53.009 [2024-12-06 16:29:34.621564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.009 [2024-12-06 16:29:34.621598] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:53.009 [2024-12-06 16:29:34.621609] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.009 [2024-12-06 16:29:34.623973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.009 [2024-12-06 16:29:34.624072] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:53.009 BaseBdev1 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 BaseBdev2_malloc 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:53.009 16:29:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 true 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 [2024-12-06 16:29:34.662504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:53.009 [2024-12-06 16:29:34.662600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.009 [2024-12-06 16:29:34.662624] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:53.009 [2024-12-06 16:29:34.662633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.009 [2024-12-06 16:29:34.664957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.009 [2024-12-06 16:29:34.664997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:53.009 BaseBdev2 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:53.009 BaseBdev3_malloc 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 true 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 [2024-12-06 16:29:34.703675] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:53.009 [2024-12-06 16:29:34.703725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.009 [2024-12-06 16:29:34.703744] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:53.009 [2024-12-06 16:29:34.703753] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.009 [2024-12-06 16:29:34.705987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.009 [2024-12-06 16:29:34.706099] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:53.009 BaseBdev3 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 BaseBdev4_malloc 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 true 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 [2024-12-06 16:29:34.755449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:53.009 [2024-12-06 16:29:34.755501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.009 [2024-12-06 16:29:34.755525] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:53.009 [2024-12-06 16:29:34.755535] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.009 [2024-12-06 16:29:34.757757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.009 [2024-12-06 16:29:34.757855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:53.009 BaseBdev4 
00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 [2024-12-06 16:29:34.767487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.009 [2024-12-06 16:29:34.769526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:53.009 [2024-12-06 16:29:34.769616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:53.009 [2024-12-06 16:29:34.769672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:53.009 [2024-12-06 16:29:34.769870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:13:53.009 [2024-12-06 16:29:34.769882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:53.009 [2024-12-06 16:29:34.770144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:53.009 [2024-12-06 16:29:34.770297] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:13:53.010 [2024-12-06 16:29:34.770319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:13:53.010 [2024-12-06 16:29:34.770448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.010 "name": "raid_bdev1", 00:13:53.010 "uuid": "369c193b-9118-46be-bf79-07bfc5c368f7", 00:13:53.010 "strip_size_kb": 0, 00:13:53.010 "state": "online", 00:13:53.010 "raid_level": "raid1", 00:13:53.010 "superblock": true, 00:13:53.010 "num_base_bdevs": 4, 00:13:53.010 "num_base_bdevs_discovered": 4, 00:13:53.010 
"num_base_bdevs_operational": 4, 00:13:53.010 "base_bdevs_list": [ 00:13:53.010 { 00:13:53.010 "name": "BaseBdev1", 00:13:53.010 "uuid": "58d3577d-d685-50e9-8e40-53956f511f99", 00:13:53.010 "is_configured": true, 00:13:53.010 "data_offset": 2048, 00:13:53.010 "data_size": 63488 00:13:53.010 }, 00:13:53.010 { 00:13:53.010 "name": "BaseBdev2", 00:13:53.010 "uuid": "a7db6725-20fe-5a4b-aa1d-871a5701da22", 00:13:53.010 "is_configured": true, 00:13:53.010 "data_offset": 2048, 00:13:53.010 "data_size": 63488 00:13:53.010 }, 00:13:53.010 { 00:13:53.010 "name": "BaseBdev3", 00:13:53.010 "uuid": "bb29e352-e12c-5fa0-8b99-0833d4409e9e", 00:13:53.010 "is_configured": true, 00:13:53.010 "data_offset": 2048, 00:13:53.010 "data_size": 63488 00:13:53.010 }, 00:13:53.010 { 00:13:53.010 "name": "BaseBdev4", 00:13:53.010 "uuid": "723d5fd8-6bc9-5035-8fa6-12645c2fe706", 00:13:53.010 "is_configured": true, 00:13:53.010 "data_offset": 2048, 00:13:53.010 "data_size": 63488 00:13:53.010 } 00:13:53.010 ] 00:13:53.010 }' 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.010 16:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.577 16:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:53.577 16:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:53.577 [2024-12-06 16:29:35.326949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.532 [2024-12-06 16:29:36.239317] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:54.532 [2024-12-06 16:29:36.239390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:54.532 [2024-12-06 16:29:36.239660] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000068a0 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.532 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.532 "name": "raid_bdev1", 00:13:54.532 "uuid": "369c193b-9118-46be-bf79-07bfc5c368f7", 00:13:54.532 "strip_size_kb": 0, 00:13:54.532 "state": "online", 00:13:54.532 "raid_level": "raid1", 00:13:54.532 "superblock": true, 00:13:54.532 "num_base_bdevs": 4, 00:13:54.532 "num_base_bdevs_discovered": 3, 00:13:54.532 "num_base_bdevs_operational": 3, 00:13:54.532 "base_bdevs_list": [ 00:13:54.532 { 00:13:54.532 "name": null, 00:13:54.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.532 "is_configured": false, 00:13:54.532 "data_offset": 0, 00:13:54.532 "data_size": 63488 00:13:54.532 }, 00:13:54.532 { 00:13:54.532 "name": "BaseBdev2", 00:13:54.532 "uuid": "a7db6725-20fe-5a4b-aa1d-871a5701da22", 00:13:54.532 "is_configured": true, 00:13:54.532 "data_offset": 2048, 00:13:54.532 "data_size": 63488 00:13:54.532 }, 00:13:54.532 { 00:13:54.533 "name": "BaseBdev3", 00:13:54.533 "uuid": "bb29e352-e12c-5fa0-8b99-0833d4409e9e", 00:13:54.533 "is_configured": true, 00:13:54.533 "data_offset": 2048, 00:13:54.533 "data_size": 63488 00:13:54.533 }, 00:13:54.533 { 00:13:54.533 "name": "BaseBdev4", 00:13:54.533 "uuid": "723d5fd8-6bc9-5035-8fa6-12645c2fe706", 00:13:54.533 "is_configured": true, 00:13:54.533 "data_offset": 2048, 00:13:54.533 "data_size": 63488 00:13:54.533 } 00:13:54.533 ] 
00:13:54.533 }' 00:13:54.533 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.533 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.103 [2024-12-06 16:29:36.748036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:55.103 [2024-12-06 16:29:36.748074] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.103 [2024-12-06 16:29:36.751010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.103 [2024-12-06 16:29:36.751063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.103 [2024-12-06 16:29:36.751159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.103 [2024-12-06 16:29:36.751172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:13:55.103 { 00:13:55.103 "results": [ 00:13:55.103 { 00:13:55.103 "job": "raid_bdev1", 00:13:55.103 "core_mask": "0x1", 00:13:55.103 "workload": "randrw", 00:13:55.103 "percentage": 50, 00:13:55.103 "status": "finished", 00:13:55.103 "queue_depth": 1, 00:13:55.103 "io_size": 131072, 00:13:55.103 "runtime": 1.421564, 00:13:55.103 "iops": 11552.768640736542, 00:13:55.103 "mibps": 1444.0960800920677, 00:13:55.103 "io_failed": 0, 00:13:55.103 "io_timeout": 0, 00:13:55.103 "avg_latency_us": 83.68303585316896, 00:13:55.103 "min_latency_us": 24.370305676855896, 00:13:55.103 "max_latency_us": 1709.9458515283843 00:13:55.103 } 00:13:55.103 ], 00:13:55.103 "core_count": 1 
00:13:55.103 } 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86293 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 86293 ']' 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 86293 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86293 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86293' 00:13:55.103 killing process with pid 86293 00:13:55.103 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 86293 00:13:55.103 [2024-12-06 16:29:36.787940] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:55.104 16:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 86293 00:13:55.104 [2024-12-06 16:29:36.824473] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:55.364 16:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lDEc1B4aPQ 00:13:55.364 16:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:55.364 16:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:55.364 16:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:55.364 16:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:55.364 16:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:55.364 16:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:55.364 16:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:55.364 00:13:55.364 real 0m3.438s 00:13:55.364 user 0m4.403s 00:13:55.364 sys 0m0.556s 00:13:55.364 ************************************ 00:13:55.364 END TEST raid_write_error_test 00:13:55.364 ************************************ 00:13:55.364 16:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.364 16:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.364 16:29:37 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:55.364 16:29:37 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:55.364 16:29:37 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:55.364 16:29:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:55.364 16:29:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.364 16:29:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:55.364 ************************************ 00:13:55.364 START TEST raid_rebuild_test 00:13:55.364 ************************************ 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:55.364 
16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86428 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86428 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 86428 ']' 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:55.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:55.364 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.622 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:55.622 Zero copy mechanism will not be used. 00:13:55.623 [2024-12-06 16:29:37.203532] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:13:55.623 [2024-12-06 16:29:37.203672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86428 ] 00:13:55.623 [2024-12-06 16:29:37.357131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.623 [2024-12-06 16:29:37.387003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.623 [2024-12-06 16:29:37.433483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.623 [2024-12-06 16:29:37.433520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.559 BaseBdev1_malloc 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.559 [2024-12-06 16:29:38.098910] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:56.559 
[2024-12-06 16:29:38.099034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.559 [2024-12-06 16:29:38.099093] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:56.559 [2024-12-06 16:29:38.099120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.559 [2024-12-06 16:29:38.101650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.559 [2024-12-06 16:29:38.101694] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:56.559 BaseBdev1 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.559 BaseBdev2_malloc 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.559 [2024-12-06 16:29:38.128137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:56.559 [2024-12-06 16:29:38.128197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.559 [2024-12-06 16:29:38.128234] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:13:56.559 [2024-12-06 16:29:38.128244] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.559 [2024-12-06 16:29:38.130566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.559 [2024-12-06 16:29:38.130603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:56.559 BaseBdev2 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.559 spare_malloc 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.559 spare_delay 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.559 [2024-12-06 16:29:38.169012] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:56.559 [2024-12-06 16:29:38.169075] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:56.559 [2024-12-06 16:29:38.169099] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:56.559 [2024-12-06 16:29:38.169109] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.559 [2024-12-06 16:29:38.171602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.559 [2024-12-06 16:29:38.171640] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:56.559 spare 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.559 [2024-12-06 16:29:38.181024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.559 [2024-12-06 16:29:38.183211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.559 [2024-12-06 16:29:38.183309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:56.559 [2024-12-06 16:29:38.183321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:56.559 [2024-12-06 16:29:38.183625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:56.559 [2024-12-06 16:29:38.183775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:56.559 [2024-12-06 16:29:38.183796] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:56.559 [2024-12-06 16:29:38.183935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.559 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.560 "name": "raid_bdev1", 00:13:56.560 "uuid": "1e33f243-ce75-4887-b665-76bf418796a5", 00:13:56.560 "strip_size_kb": 0, 00:13:56.560 "state": "online", 00:13:56.560 
"raid_level": "raid1", 00:13:56.560 "superblock": false, 00:13:56.560 "num_base_bdevs": 2, 00:13:56.560 "num_base_bdevs_discovered": 2, 00:13:56.560 "num_base_bdevs_operational": 2, 00:13:56.560 "base_bdevs_list": [ 00:13:56.560 { 00:13:56.560 "name": "BaseBdev1", 00:13:56.560 "uuid": "fd79479c-a2af-5322-a97b-3b9aada282d6", 00:13:56.560 "is_configured": true, 00:13:56.560 "data_offset": 0, 00:13:56.560 "data_size": 65536 00:13:56.560 }, 00:13:56.560 { 00:13:56.560 "name": "BaseBdev2", 00:13:56.560 "uuid": "64775254-dbb7-54bd-936f-f81e8898943e", 00:13:56.560 "is_configured": true, 00:13:56.560 "data_offset": 0, 00:13:56.560 "data_size": 65536 00:13:56.560 } 00:13:56.560 ] 00:13:56.560 }' 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.560 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.820 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:56.820 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:56.820 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.079 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.079 [2024-12-06 16:29:38.664612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.079 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.079 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:57.079 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.080 16:29:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.080 16:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:57.338 [2024-12-06 16:29:38.947826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:57.338 /dev/nbd0 00:13:57.338 16:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:57.338 16:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:13:57.338 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:57.338 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:57.339 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:57.339 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:57.339 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.339 1+0 records in 00:13:57.339 1+0 records out 00:13:57.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337544 s, 12.1 MB/s 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:57.339 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:02.613 65536+0 records in 00:14:02.613 65536+0 records out 00:14:02.613 33554432 bytes (34 MB, 32 MiB) copied, 4.55189 s, 7.4 MB/s 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:02.613 [2024-12-06 16:29:43.787217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.613 [2024-12-06 16:29:43.823270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.613 16:29:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.613 "name": "raid_bdev1", 00:14:02.613 "uuid": "1e33f243-ce75-4887-b665-76bf418796a5", 00:14:02.613 "strip_size_kb": 0, 00:14:02.613 "state": "online", 00:14:02.613 "raid_level": "raid1", 00:14:02.613 "superblock": false, 00:14:02.613 "num_base_bdevs": 2, 00:14:02.613 "num_base_bdevs_discovered": 1, 00:14:02.613 "num_base_bdevs_operational": 1, 00:14:02.613 "base_bdevs_list": [ 00:14:02.613 { 00:14:02.613 "name": null, 00:14:02.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.613 "is_configured": false, 00:14:02.613 "data_offset": 0, 00:14:02.613 "data_size": 65536 00:14:02.613 }, 00:14:02.613 { 00:14:02.613 "name": "BaseBdev2", 00:14:02.613 "uuid": "64775254-dbb7-54bd-936f-f81e8898943e", 00:14:02.613 "is_configured": true, 00:14:02.613 "data_offset": 0, 00:14:02.613 "data_size": 65536 00:14:02.613 } 00:14:02.613 ] 00:14:02.613 }' 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.613 16:29:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.613 16:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.613 16:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.613 16:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.613 [2024-12-06 16:29:44.298472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.613 [2024-12-06 16:29:44.303863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 
00:14:02.613 16:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.613 16:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:02.613 [2024-12-06 16:29:44.306165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.552 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.552 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.552 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.552 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.552 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.552 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.552 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.552 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.552 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.552 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.552 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.552 "name": "raid_bdev1", 00:14:03.552 "uuid": "1e33f243-ce75-4887-b665-76bf418796a5", 00:14:03.552 "strip_size_kb": 0, 00:14:03.552 "state": "online", 00:14:03.552 "raid_level": "raid1", 00:14:03.552 "superblock": false, 00:14:03.552 "num_base_bdevs": 2, 00:14:03.552 "num_base_bdevs_discovered": 2, 00:14:03.552 "num_base_bdevs_operational": 2, 00:14:03.552 "process": { 00:14:03.552 "type": "rebuild", 00:14:03.552 "target": "spare", 00:14:03.552 "progress": { 00:14:03.552 
"blocks": 20480, 00:14:03.552 "percent": 31 00:14:03.552 } 00:14:03.552 }, 00:14:03.552 "base_bdevs_list": [ 00:14:03.552 { 00:14:03.552 "name": "spare", 00:14:03.552 "uuid": "7c6ba103-b2c3-58e3-81f8-80a9e0e11423", 00:14:03.552 "is_configured": true, 00:14:03.552 "data_offset": 0, 00:14:03.552 "data_size": 65536 00:14:03.552 }, 00:14:03.552 { 00:14:03.552 "name": "BaseBdev2", 00:14:03.552 "uuid": "64775254-dbb7-54bd-936f-f81e8898943e", 00:14:03.552 "is_configured": true, 00:14:03.552 "data_offset": 0, 00:14:03.552 "data_size": 65536 00:14:03.552 } 00:14:03.552 ] 00:14:03.552 }' 00:14:03.552 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.811 [2024-12-06 16:29:45.458467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.811 [2024-12-06 16:29:45.512403] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:03.811 [2024-12-06 16:29:45.512504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.811 [2024-12-06 16:29:45.512529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.811 [2024-12-06 16:29:45.512539] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:03.811 16:29:45 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.811 "name": "raid_bdev1", 00:14:03.811 "uuid": "1e33f243-ce75-4887-b665-76bf418796a5", 00:14:03.811 "strip_size_kb": 0, 00:14:03.811 "state": "online", 00:14:03.811 "raid_level": "raid1", 00:14:03.811 
"superblock": false, 00:14:03.811 "num_base_bdevs": 2, 00:14:03.811 "num_base_bdevs_discovered": 1, 00:14:03.811 "num_base_bdevs_operational": 1, 00:14:03.811 "base_bdevs_list": [ 00:14:03.811 { 00:14:03.811 "name": null, 00:14:03.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.811 "is_configured": false, 00:14:03.811 "data_offset": 0, 00:14:03.811 "data_size": 65536 00:14:03.811 }, 00:14:03.811 { 00:14:03.811 "name": "BaseBdev2", 00:14:03.811 "uuid": "64775254-dbb7-54bd-936f-f81e8898943e", 00:14:03.811 "is_configured": true, 00:14:03.811 "data_offset": 0, 00:14:03.811 "data_size": 65536 00:14:03.811 } 00:14:03.811 ] 00:14:03.811 }' 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.811 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:04.379 "name": "raid_bdev1", 00:14:04.379 "uuid": "1e33f243-ce75-4887-b665-76bf418796a5", 00:14:04.379 "strip_size_kb": 0, 00:14:04.379 "state": "online", 00:14:04.379 "raid_level": "raid1", 00:14:04.379 "superblock": false, 00:14:04.379 "num_base_bdevs": 2, 00:14:04.379 "num_base_bdevs_discovered": 1, 00:14:04.379 "num_base_bdevs_operational": 1, 00:14:04.379 "base_bdevs_list": [ 00:14:04.379 { 00:14:04.379 "name": null, 00:14:04.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.379 "is_configured": false, 00:14:04.379 "data_offset": 0, 00:14:04.379 "data_size": 65536 00:14:04.379 }, 00:14:04.379 { 00:14:04.379 "name": "BaseBdev2", 00:14:04.379 "uuid": "64775254-dbb7-54bd-936f-f81e8898943e", 00:14:04.379 "is_configured": true, 00:14:04.379 "data_offset": 0, 00:14:04.379 "data_size": 65536 00:14:04.379 } 00:14:04.379 ] 00:14:04.379 }' 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.379 [2024-12-06 16:29:46.160929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.379 [2024-12-06 16:29:46.166130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:14:04.379 16:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.379 
16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:04.379 [2024-12-06 16:29:46.168312] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.756 "name": "raid_bdev1", 00:14:05.756 "uuid": "1e33f243-ce75-4887-b665-76bf418796a5", 00:14:05.756 "strip_size_kb": 0, 00:14:05.756 "state": "online", 00:14:05.756 "raid_level": "raid1", 00:14:05.756 "superblock": false, 00:14:05.756 "num_base_bdevs": 2, 00:14:05.756 "num_base_bdevs_discovered": 2, 00:14:05.756 "num_base_bdevs_operational": 2, 00:14:05.756 "process": { 00:14:05.756 "type": "rebuild", 00:14:05.756 "target": "spare", 00:14:05.756 "progress": { 00:14:05.756 "blocks": 20480, 00:14:05.756 "percent": 31 00:14:05.756 } 00:14:05.756 }, 00:14:05.756 "base_bdevs_list": [ 
00:14:05.756 { 00:14:05.756 "name": "spare", 00:14:05.756 "uuid": "7c6ba103-b2c3-58e3-81f8-80a9e0e11423", 00:14:05.756 "is_configured": true, 00:14:05.756 "data_offset": 0, 00:14:05.756 "data_size": 65536 00:14:05.756 }, 00:14:05.756 { 00:14:05.756 "name": "BaseBdev2", 00:14:05.756 "uuid": "64775254-dbb7-54bd-936f-f81e8898943e", 00:14:05.756 "is_configured": true, 00:14:05.756 "data_offset": 0, 00:14:05.756 "data_size": 65536 00:14:05.756 } 00:14:05.756 ] 00:14:05.756 }' 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=297 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.756 
16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.756 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.756 "name": "raid_bdev1", 00:14:05.756 "uuid": "1e33f243-ce75-4887-b665-76bf418796a5", 00:14:05.756 "strip_size_kb": 0, 00:14:05.756 "state": "online", 00:14:05.756 "raid_level": "raid1", 00:14:05.756 "superblock": false, 00:14:05.756 "num_base_bdevs": 2, 00:14:05.756 "num_base_bdevs_discovered": 2, 00:14:05.756 "num_base_bdevs_operational": 2, 00:14:05.756 "process": { 00:14:05.756 "type": "rebuild", 00:14:05.756 "target": "spare", 00:14:05.756 "progress": { 00:14:05.756 "blocks": 22528, 00:14:05.756 "percent": 34 00:14:05.756 } 00:14:05.756 }, 00:14:05.756 "base_bdevs_list": [ 00:14:05.757 { 00:14:05.757 "name": "spare", 00:14:05.757 "uuid": "7c6ba103-b2c3-58e3-81f8-80a9e0e11423", 00:14:05.757 "is_configured": true, 00:14:05.757 "data_offset": 0, 00:14:05.757 "data_size": 65536 00:14:05.757 }, 00:14:05.757 { 00:14:05.757 "name": "BaseBdev2", 00:14:05.757 "uuid": "64775254-dbb7-54bd-936f-f81e8898943e", 00:14:05.757 "is_configured": true, 00:14:05.757 "data_offset": 0, 00:14:05.757 "data_size": 65536 00:14:05.757 } 00:14:05.757 ] 00:14:05.757 }' 00:14:05.757 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.757 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:05.757 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.757 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.757 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.695 "name": "raid_bdev1", 00:14:06.695 "uuid": "1e33f243-ce75-4887-b665-76bf418796a5", 00:14:06.695 "strip_size_kb": 0, 00:14:06.695 "state": "online", 00:14:06.695 "raid_level": "raid1", 00:14:06.695 "superblock": false, 00:14:06.695 "num_base_bdevs": 2, 00:14:06.695 "num_base_bdevs_discovered": 2, 00:14:06.695 "num_base_bdevs_operational": 2, 00:14:06.695 "process": { 
00:14:06.695 "type": "rebuild", 00:14:06.695 "target": "spare", 00:14:06.695 "progress": { 00:14:06.695 "blocks": 45056, 00:14:06.695 "percent": 68 00:14:06.695 } 00:14:06.695 }, 00:14:06.695 "base_bdevs_list": [ 00:14:06.695 { 00:14:06.695 "name": "spare", 00:14:06.695 "uuid": "7c6ba103-b2c3-58e3-81f8-80a9e0e11423", 00:14:06.695 "is_configured": true, 00:14:06.695 "data_offset": 0, 00:14:06.695 "data_size": 65536 00:14:06.695 }, 00:14:06.695 { 00:14:06.695 "name": "BaseBdev2", 00:14:06.695 "uuid": "64775254-dbb7-54bd-936f-f81e8898943e", 00:14:06.695 "is_configured": true, 00:14:06.695 "data_offset": 0, 00:14:06.695 "data_size": 65536 00:14:06.695 } 00:14:06.695 ] 00:14:06.695 }' 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.695 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.955 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.955 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:07.892 [2024-12-06 16:29:49.382238] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:07.892 [2024-12-06 16:29:49.382435] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:07.892 [2024-12-06 16:29:49.382490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.892 "name": "raid_bdev1", 00:14:07.892 "uuid": "1e33f243-ce75-4887-b665-76bf418796a5", 00:14:07.892 "strip_size_kb": 0, 00:14:07.892 "state": "online", 00:14:07.892 "raid_level": "raid1", 00:14:07.892 "superblock": false, 00:14:07.892 "num_base_bdevs": 2, 00:14:07.892 "num_base_bdevs_discovered": 2, 00:14:07.892 "num_base_bdevs_operational": 2, 00:14:07.892 "base_bdevs_list": [ 00:14:07.892 { 00:14:07.892 "name": "spare", 00:14:07.892 "uuid": "7c6ba103-b2c3-58e3-81f8-80a9e0e11423", 00:14:07.892 "is_configured": true, 00:14:07.892 "data_offset": 0, 00:14:07.892 "data_size": 65536 00:14:07.892 }, 00:14:07.892 { 00:14:07.892 "name": "BaseBdev2", 00:14:07.892 "uuid": "64775254-dbb7-54bd-936f-f81e8898943e", 00:14:07.892 "is_configured": true, 00:14:07.892 "data_offset": 0, 00:14:07.892 "data_size": 65536 00:14:07.892 } 00:14:07.892 ] 00:14:07.892 }' 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:07.892 16:29:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.892 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.152 "name": "raid_bdev1", 00:14:08.152 "uuid": "1e33f243-ce75-4887-b665-76bf418796a5", 00:14:08.152 "strip_size_kb": 0, 00:14:08.152 "state": "online", 00:14:08.152 "raid_level": "raid1", 00:14:08.152 "superblock": false, 00:14:08.152 "num_base_bdevs": 2, 00:14:08.152 "num_base_bdevs_discovered": 2, 00:14:08.152 "num_base_bdevs_operational": 2, 00:14:08.152 "base_bdevs_list": [ 00:14:08.152 { 00:14:08.152 "name": "spare", 00:14:08.152 "uuid": "7c6ba103-b2c3-58e3-81f8-80a9e0e11423", 00:14:08.152 "is_configured": true, 
00:14:08.152 "data_offset": 0, 00:14:08.152 "data_size": 65536 00:14:08.152 }, 00:14:08.152 { 00:14:08.152 "name": "BaseBdev2", 00:14:08.152 "uuid": "64775254-dbb7-54bd-936f-f81e8898943e", 00:14:08.152 "is_configured": true, 00:14:08.152 "data_offset": 0, 00:14:08.152 "data_size": 65536 00:14:08.152 } 00:14:08.152 ] 00:14:08.152 }' 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.152 "name": "raid_bdev1", 00:14:08.152 "uuid": "1e33f243-ce75-4887-b665-76bf418796a5", 00:14:08.152 "strip_size_kb": 0, 00:14:08.152 "state": "online", 00:14:08.152 "raid_level": "raid1", 00:14:08.152 "superblock": false, 00:14:08.152 "num_base_bdevs": 2, 00:14:08.152 "num_base_bdevs_discovered": 2, 00:14:08.152 "num_base_bdevs_operational": 2, 00:14:08.152 "base_bdevs_list": [ 00:14:08.152 { 00:14:08.152 "name": "spare", 00:14:08.152 "uuid": "7c6ba103-b2c3-58e3-81f8-80a9e0e11423", 00:14:08.152 "is_configured": true, 00:14:08.152 "data_offset": 0, 00:14:08.152 "data_size": 65536 00:14:08.152 }, 00:14:08.152 { 00:14:08.152 "name": "BaseBdev2", 00:14:08.152 "uuid": "64775254-dbb7-54bd-936f-f81e8898943e", 00:14:08.152 "is_configured": true, 00:14:08.152 "data_offset": 0, 00:14:08.152 "data_size": 65536 00:14:08.152 } 00:14:08.152 ] 00:14:08.152 }' 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.152 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.743 [2024-12-06 16:29:50.278098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.743 [2024-12-06 16:29:50.278132] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.743 [2024-12-06 16:29:50.278238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.743 [2024-12-06 16:29:50.278311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.743 [2024-12-06 16:29:50.278329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:08.743 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:08.743 /dev/nbd0 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.003 1+0 records in 00:14:09.003 1+0 records out 00:14:09.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624317 s, 6.6 MB/s 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:09.003 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:09.263 /dev/nbd1 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.263 1+0 records in 00:14:09.263 1+0 records out 00:14:09.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321807 s, 12.7 MB/s 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.263 16:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:09.523 16:29:51 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:09.523 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:09.523 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:09.523 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.523 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.523 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:09.523 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:09.523 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.523 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.523 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86428 00:14:09.782 16:29:51 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 86428 ']' 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 86428 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86428 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.782 killing process with pid 86428 00:14:09.782 Received shutdown signal, test time was about 60.000000 seconds 00:14:09.782 00:14:09.782 Latency(us) 00:14:09.782 [2024-12-06T16:29:51.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.782 [2024-12-06T16:29:51.621Z] =================================================================================================================== 00:14:09.782 [2024-12-06T16:29:51.621Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86428' 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 86428 00:14:09.782 [2024-12-06 16:29:51.531274] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.782 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 86428 00:14:09.782 [2024-12-06 16:29:51.563719] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:10.042 00:14:10.042 real 0m14.676s 00:14:10.042 user 0m16.776s 00:14:10.042 sys 0m3.084s 00:14:10.042 
************************************ 00:14:10.042 END TEST raid_rebuild_test 00:14:10.042 ************************************ 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.042 16:29:51 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:10.042 16:29:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:10.042 16:29:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.042 16:29:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.042 ************************************ 00:14:10.042 START TEST raid_rebuild_test_sb 00:14:10.042 ************************************ 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:10.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86839 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86839 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86839 ']' 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.042 16:29:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.302 [2024-12-06 16:29:51.956930] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:14:10.302 [2024-12-06 16:29:51.957143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86839 ] 00:14:10.302 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:10.302 Zero copy mechanism will not be used. 
00:14:10.302 [2024-12-06 16:29:52.128287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.562 [2024-12-06 16:29:52.156048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.562 [2024-12-06 16:29:52.200483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.562 [2024-12-06 16:29:52.200603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.130 BaseBdev1_malloc 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.130 [2024-12-06 16:29:52.853332] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:11.130 [2024-12-06 16:29:52.853400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.130 [2024-12-06 16:29:52.853430] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:11.130 [2024-12-06 
16:29:52.853452] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.130 [2024-12-06 16:29:52.855907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.130 [2024-12-06 16:29:52.856010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.130 BaseBdev1 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.130 BaseBdev2_malloc 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.130 [2024-12-06 16:29:52.882441] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:11.130 [2024-12-06 16:29:52.882558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.130 [2024-12-06 16:29:52.882602] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:11.130 [2024-12-06 16:29:52.882637] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.130 [2024-12-06 16:29:52.885080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:11.130 [2024-12-06 16:29:52.885174] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:11.130 BaseBdev2 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.130 spare_malloc 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.130 spare_delay 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.130 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.130 [2024-12-06 16:29:52.923622] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.130 [2024-12-06 16:29:52.923751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.130 [2024-12-06 16:29:52.923813] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:11.130 [2024-12-06 16:29:52.923853] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.130 [2024-12-06 16:29:52.926347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.130 [2024-12-06 16:29:52.926435] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.130 spare 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.131 [2024-12-06 16:29:52.935636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.131 [2024-12-06 16:29:52.937726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.131 [2024-12-06 16:29:52.937948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:11.131 [2024-12-06 16:29:52.938000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:11.131 [2024-12-06 16:29:52.938339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:14:11.131 [2024-12-06 16:29:52.938529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:11.131 [2024-12-06 16:29:52.938587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:11.131 [2024-12-06 16:29:52.938787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.131 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.389 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.389 "name": "raid_bdev1", 00:14:11.389 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:11.389 "strip_size_kb": 0, 00:14:11.389 "state": "online", 00:14:11.389 "raid_level": "raid1", 00:14:11.389 "superblock": true, 00:14:11.389 "num_base_bdevs": 2, 00:14:11.389 
"num_base_bdevs_discovered": 2, 00:14:11.389 "num_base_bdevs_operational": 2, 00:14:11.389 "base_bdevs_list": [ 00:14:11.389 { 00:14:11.389 "name": "BaseBdev1", 00:14:11.389 "uuid": "7b7c0fd6-e5d4-55f5-bb93-4cf1e49375e9", 00:14:11.389 "is_configured": true, 00:14:11.389 "data_offset": 2048, 00:14:11.389 "data_size": 63488 00:14:11.389 }, 00:14:11.389 { 00:14:11.389 "name": "BaseBdev2", 00:14:11.389 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:11.389 "is_configured": true, 00:14:11.389 "data_offset": 2048, 00:14:11.389 "data_size": 63488 00:14:11.389 } 00:14:11.389 ] 00:14:11.389 }' 00:14:11.389 16:29:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.389 16:29:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.648 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:11.648 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:11.648 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.648 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.648 [2024-12-06 16:29:53.427098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.648 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.648 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:11.648 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.648 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:11.648 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.648 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:11.648 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.908 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:11.908 [2024-12-06 16:29:53.710403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:11.908 /dev/nbd0 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.166 1+0 records in 00:14:12.166 1+0 records out 00:14:12.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566374 s, 7.2 MB/s 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.166 16:29:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:12.166 16:29:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:16.353 63488+0 records in 00:14:16.353 63488+0 records out 00:14:16.353 32505856 bytes (33 MB, 31 MiB) copied, 4.04876 s, 8.0 MB/s 00:14:16.354 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:16.354 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.354 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:16.354 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:16.354 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:16.354 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.354 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:16.354 [2024-12-06 16:29:58.090412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.354 [2024-12-06 16:29:58.110497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.354 16:29:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.354 "name": "raid_bdev1", 00:14:16.354 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:16.354 "strip_size_kb": 0, 00:14:16.354 "state": "online", 00:14:16.354 "raid_level": "raid1", 00:14:16.354 "superblock": true, 00:14:16.354 "num_base_bdevs": 2, 00:14:16.354 "num_base_bdevs_discovered": 1, 00:14:16.354 "num_base_bdevs_operational": 1, 00:14:16.354 "base_bdevs_list": [ 00:14:16.354 { 00:14:16.354 "name": null, 00:14:16.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.354 "is_configured": false, 00:14:16.354 "data_offset": 0, 00:14:16.354 "data_size": 63488 00:14:16.354 }, 00:14:16.354 { 00:14:16.354 "name": "BaseBdev2", 00:14:16.354 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:16.354 "is_configured": true, 00:14:16.354 "data_offset": 2048, 00:14:16.354 "data_size": 63488 00:14:16.354 } 00:14:16.354 ] 00:14:16.354 }' 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.354 16:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.922 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:16.922 16:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.922 16:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.922 [2024-12-06 16:29:58.529827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:14:16.922 [2024-12-06 16:29:58.535077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:14:16.922 16:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.922 16:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:16.922 [2024-12-06 16:29:58.537346] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.860 "name": "raid_bdev1", 00:14:17.860 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:17.860 "strip_size_kb": 0, 00:14:17.860 "state": "online", 00:14:17.860 "raid_level": "raid1", 00:14:17.860 "superblock": true, 00:14:17.860 "num_base_bdevs": 2, 00:14:17.860 
"num_base_bdevs_discovered": 2, 00:14:17.860 "num_base_bdevs_operational": 2, 00:14:17.860 "process": { 00:14:17.860 "type": "rebuild", 00:14:17.860 "target": "spare", 00:14:17.860 "progress": { 00:14:17.860 "blocks": 20480, 00:14:17.860 "percent": 32 00:14:17.860 } 00:14:17.860 }, 00:14:17.860 "base_bdevs_list": [ 00:14:17.860 { 00:14:17.860 "name": "spare", 00:14:17.860 "uuid": "22e37688-44ea-53e0-9bef-34906e788b72", 00:14:17.860 "is_configured": true, 00:14:17.860 "data_offset": 2048, 00:14:17.860 "data_size": 63488 00:14:17.860 }, 00:14:17.860 { 00:14:17.860 "name": "BaseBdev2", 00:14:17.860 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:17.860 "is_configured": true, 00:14:17.860 "data_offset": 2048, 00:14:17.860 "data_size": 63488 00:14:17.860 } 00:14:17.860 ] 00:14:17.860 }' 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.860 16:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.860 [2024-12-06 16:29:59.677587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.118 [2024-12-06 16:29:59.743230] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:18.118 [2024-12-06 16:29:59.743333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.118 [2024-12-06 16:29:59.743360] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.118 [2024-12-06 16:29:59.743371] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.118 16:29:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.118 "name": "raid_bdev1", 00:14:18.118 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:18.118 "strip_size_kb": 0, 00:14:18.118 "state": "online", 00:14:18.118 "raid_level": "raid1", 00:14:18.118 "superblock": true, 00:14:18.118 "num_base_bdevs": 2, 00:14:18.118 "num_base_bdevs_discovered": 1, 00:14:18.118 "num_base_bdevs_operational": 1, 00:14:18.118 "base_bdevs_list": [ 00:14:18.118 { 00:14:18.118 "name": null, 00:14:18.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.118 "is_configured": false, 00:14:18.118 "data_offset": 0, 00:14:18.118 "data_size": 63488 00:14:18.118 }, 00:14:18.118 { 00:14:18.118 "name": "BaseBdev2", 00:14:18.118 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:18.118 "is_configured": true, 00:14:18.118 "data_offset": 2048, 00:14:18.118 "data_size": 63488 00:14:18.118 } 00:14:18.118 ] 00:14:18.118 }' 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.118 16:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.692 "name": "raid_bdev1", 00:14:18.692 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:18.692 "strip_size_kb": 0, 00:14:18.692 "state": "online", 00:14:18.692 "raid_level": "raid1", 00:14:18.692 "superblock": true, 00:14:18.692 "num_base_bdevs": 2, 00:14:18.692 "num_base_bdevs_discovered": 1, 00:14:18.692 "num_base_bdevs_operational": 1, 00:14:18.692 "base_bdevs_list": [ 00:14:18.692 { 00:14:18.692 "name": null, 00:14:18.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.692 "is_configured": false, 00:14:18.692 "data_offset": 0, 00:14:18.692 "data_size": 63488 00:14:18.692 }, 00:14:18.692 { 00:14:18.692 "name": "BaseBdev2", 00:14:18.692 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:18.692 "is_configured": true, 00:14:18.692 "data_offset": 2048, 00:14:18.692 "data_size": 63488 00:14:18.692 } 00:14:18.692 ] 00:14:18.692 }' 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:18.692 [2024-12-06 16:30:00.387718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.692 [2024-12-06 16:30:00.393065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.692 16:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:18.692 [2024-12-06 16:30:00.395420] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.627 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.627 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.627 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.627 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.627 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.627 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.627 16:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.627 16:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.627 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.627 16:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.627 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.627 "name": "raid_bdev1", 00:14:19.627 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:19.627 "strip_size_kb": 0, 00:14:19.627 "state": "online", 00:14:19.627 "raid_level": "raid1", 
00:14:19.627 "superblock": true, 00:14:19.627 "num_base_bdevs": 2, 00:14:19.627 "num_base_bdevs_discovered": 2, 00:14:19.627 "num_base_bdevs_operational": 2, 00:14:19.627 "process": { 00:14:19.627 "type": "rebuild", 00:14:19.627 "target": "spare", 00:14:19.627 "progress": { 00:14:19.627 "blocks": 20480, 00:14:19.627 "percent": 32 00:14:19.627 } 00:14:19.627 }, 00:14:19.627 "base_bdevs_list": [ 00:14:19.627 { 00:14:19.627 "name": "spare", 00:14:19.627 "uuid": "22e37688-44ea-53e0-9bef-34906e788b72", 00:14:19.627 "is_configured": true, 00:14:19.627 "data_offset": 2048, 00:14:19.627 "data_size": 63488 00:14:19.627 }, 00:14:19.627 { 00:14:19.627 "name": "BaseBdev2", 00:14:19.627 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:19.627 "is_configured": true, 00:14:19.627 "data_offset": 2048, 00:14:19.627 "data_size": 63488 00:14:19.627 } 00:14:19.627 ] 00:14:19.627 }' 00:14:19.627 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:19.885 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:19.885 16:30:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=311 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.885 "name": "raid_bdev1", 00:14:19.885 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:19.885 "strip_size_kb": 0, 00:14:19.885 "state": "online", 00:14:19.885 "raid_level": "raid1", 00:14:19.885 "superblock": true, 00:14:19.885 "num_base_bdevs": 2, 00:14:19.885 "num_base_bdevs_discovered": 2, 00:14:19.885 "num_base_bdevs_operational": 2, 00:14:19.885 "process": { 00:14:19.885 "type": "rebuild", 00:14:19.885 "target": "spare", 00:14:19.885 "progress": { 00:14:19.885 "blocks": 22528, 00:14:19.885 "percent": 35 00:14:19.885 } 00:14:19.885 }, 00:14:19.885 "base_bdevs_list": [ 
00:14:19.885 { 00:14:19.885 "name": "spare", 00:14:19.885 "uuid": "22e37688-44ea-53e0-9bef-34906e788b72", 00:14:19.885 "is_configured": true, 00:14:19.885 "data_offset": 2048, 00:14:19.885 "data_size": 63488 00:14:19.885 }, 00:14:19.885 { 00:14:19.885 "name": "BaseBdev2", 00:14:19.885 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:19.885 "is_configured": true, 00:14:19.885 "data_offset": 2048, 00:14:19.885 "data_size": 63488 00:14:19.885 } 00:14:19.885 ] 00:14:19.885 }' 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.885 16:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.261 "name": "raid_bdev1", 00:14:21.261 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:21.261 "strip_size_kb": 0, 00:14:21.261 "state": "online", 00:14:21.261 "raid_level": "raid1", 00:14:21.261 "superblock": true, 00:14:21.261 "num_base_bdevs": 2, 00:14:21.261 "num_base_bdevs_discovered": 2, 00:14:21.261 "num_base_bdevs_operational": 2, 00:14:21.261 "process": { 00:14:21.261 "type": "rebuild", 00:14:21.261 "target": "spare", 00:14:21.261 "progress": { 00:14:21.261 "blocks": 45056, 00:14:21.261 "percent": 70 00:14:21.261 } 00:14:21.261 }, 00:14:21.261 "base_bdevs_list": [ 00:14:21.261 { 00:14:21.261 "name": "spare", 00:14:21.261 "uuid": "22e37688-44ea-53e0-9bef-34906e788b72", 00:14:21.261 "is_configured": true, 00:14:21.261 "data_offset": 2048, 00:14:21.261 "data_size": 63488 00:14:21.261 }, 00:14:21.261 { 00:14:21.261 "name": "BaseBdev2", 00:14:21.261 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:21.261 "is_configured": true, 00:14:21.261 "data_offset": 2048, 00:14:21.261 "data_size": 63488 00:14:21.261 } 00:14:21.261 ] 00:14:21.261 }' 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.261 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.831 [2024-12-06 
16:30:03.509000] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:21.831 [2024-12-06 16:30:03.509213] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:21.831 [2024-12-06 16:30:03.509369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.090 "name": "raid_bdev1", 00:14:22.090 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:22.090 "strip_size_kb": 0, 00:14:22.090 "state": "online", 00:14:22.090 "raid_level": "raid1", 00:14:22.090 "superblock": true, 00:14:22.090 "num_base_bdevs": 2, 00:14:22.090 "num_base_bdevs_discovered": 2, 00:14:22.090 
"num_base_bdevs_operational": 2, 00:14:22.090 "base_bdevs_list": [ 00:14:22.090 { 00:14:22.090 "name": "spare", 00:14:22.090 "uuid": "22e37688-44ea-53e0-9bef-34906e788b72", 00:14:22.090 "is_configured": true, 00:14:22.090 "data_offset": 2048, 00:14:22.090 "data_size": 63488 00:14:22.090 }, 00:14:22.090 { 00:14:22.090 "name": "BaseBdev2", 00:14:22.090 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:22.090 "is_configured": true, 00:14:22.090 "data_offset": 2048, 00:14:22.090 "data_size": 63488 00:14:22.090 } 00:14:22.090 ] 00:14:22.090 }' 00:14:22.090 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.091 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:22.091 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.350 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:22.350 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:22.350 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:22.350 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.350 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:22.350 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:22.350 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.350 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.350 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.350 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:22.350 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.350 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.350 "name": "raid_bdev1", 00:14:22.350 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:22.350 "strip_size_kb": 0, 00:14:22.350 "state": "online", 00:14:22.350 "raid_level": "raid1", 00:14:22.350 "superblock": true, 00:14:22.350 "num_base_bdevs": 2, 00:14:22.350 "num_base_bdevs_discovered": 2, 00:14:22.350 "num_base_bdevs_operational": 2, 00:14:22.350 "base_bdevs_list": [ 00:14:22.350 { 00:14:22.350 "name": "spare", 00:14:22.350 "uuid": "22e37688-44ea-53e0-9bef-34906e788b72", 00:14:22.350 "is_configured": true, 00:14:22.350 "data_offset": 2048, 00:14:22.350 "data_size": 63488 00:14:22.350 }, 00:14:22.350 { 00:14:22.350 "name": "BaseBdev2", 00:14:22.350 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:22.350 "is_configured": true, 00:14:22.350 "data_offset": 2048, 00:14:22.350 "data_size": 63488 00:14:22.350 } 00:14:22.350 ] 00:14:22.350 }' 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.350 16:30:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.350 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.350 "name": "raid_bdev1", 00:14:22.351 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:22.351 "strip_size_kb": 0, 00:14:22.351 "state": "online", 00:14:22.351 "raid_level": "raid1", 00:14:22.351 "superblock": true, 00:14:22.351 "num_base_bdevs": 2, 00:14:22.351 "num_base_bdevs_discovered": 2, 00:14:22.351 "num_base_bdevs_operational": 2, 00:14:22.351 "base_bdevs_list": [ 00:14:22.351 { 00:14:22.351 "name": "spare", 00:14:22.351 "uuid": "22e37688-44ea-53e0-9bef-34906e788b72", 00:14:22.351 "is_configured": true, 00:14:22.351 "data_offset": 2048, 00:14:22.351 "data_size": 63488 00:14:22.351 }, 00:14:22.351 { 
00:14:22.351 "name": "BaseBdev2", 00:14:22.351 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:22.351 "is_configured": true, 00:14:22.351 "data_offset": 2048, 00:14:22.351 "data_size": 63488 00:14:22.351 } 00:14:22.351 ] 00:14:22.351 }' 00:14:22.351 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.351 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.932 [2024-12-06 16:30:04.524571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:22.932 [2024-12-06 16:30:04.524676] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.932 [2024-12-06 16:30:04.524809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.932 [2024-12-06 16:30:04.524917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.932 [2024-12-06 16:30:04.524975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.932 16:30:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:22.932 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:23.190 /dev/nbd0 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.190 1+0 records in 00:14:23.190 1+0 records out 00:14:23.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562416 s, 7.3 MB/s 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:23.190 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:23.449 /dev/nbd1 00:14:23.449 16:30:05 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.449 1+0 records in 00:14:23.449 1+0 records out 00:14:23.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427258 s, 9.6 MB/s 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:23.449 16:30:05 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.449 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:23.709 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:23.709 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:23.709 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:23.709 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.709 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.709 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:23.709 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:23.709 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.709 16:30:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.709 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:23.966 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:23.966 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:23.966 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:23.966 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.966 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.966 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:23.966 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.967 [2024-12-06 16:30:05.689846] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:23.967 [2024-12-06 16:30:05.689928] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.967 [2024-12-06 16:30:05.689957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:23.967 [2024-12-06 16:30:05.689972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.967 [2024-12-06 16:30:05.692888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.967 [2024-12-06 16:30:05.692944] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:23.967 [2024-12-06 16:30:05.693061] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:23.967 [2024-12-06 16:30:05.693111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.967 [2024-12-06 16:30:05.693299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.967 spare 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.967 [2024-12-06 16:30:05.793260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:23.967 [2024-12-06 16:30:05.793307] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:23.967 [2024-12-06 16:30:05.793703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:14:23.967 [2024-12-06 16:30:05.793914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:23.967 [2024-12-06 16:30:05.793930] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:23.967 [2024-12-06 16:30:05.794148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.967 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.225 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.225 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.225 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.225 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.225 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.225 
16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.225 "name": "raid_bdev1", 00:14:24.225 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:24.225 "strip_size_kb": 0, 00:14:24.225 "state": "online", 00:14:24.225 "raid_level": "raid1", 00:14:24.225 "superblock": true, 00:14:24.225 "num_base_bdevs": 2, 00:14:24.225 "num_base_bdevs_discovered": 2, 00:14:24.225 "num_base_bdevs_operational": 2, 00:14:24.225 "base_bdevs_list": [ 00:14:24.225 { 00:14:24.225 "name": "spare", 00:14:24.225 "uuid": "22e37688-44ea-53e0-9bef-34906e788b72", 00:14:24.225 "is_configured": true, 00:14:24.225 "data_offset": 2048, 00:14:24.225 "data_size": 63488 00:14:24.225 }, 00:14:24.225 { 00:14:24.225 "name": "BaseBdev2", 00:14:24.225 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:24.225 "is_configured": true, 00:14:24.225 "data_offset": 2048, 00:14:24.225 "data_size": 63488 00:14:24.225 } 00:14:24.225 ] 00:14:24.225 }' 00:14:24.225 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.225 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.483 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.483 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.483 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.483 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.483 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.483 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.483 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.483 16:30:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.483 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.483 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.483 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.483 "name": "raid_bdev1", 00:14:24.483 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:24.483 "strip_size_kb": 0, 00:14:24.483 "state": "online", 00:14:24.483 "raid_level": "raid1", 00:14:24.483 "superblock": true, 00:14:24.483 "num_base_bdevs": 2, 00:14:24.483 "num_base_bdevs_discovered": 2, 00:14:24.483 "num_base_bdevs_operational": 2, 00:14:24.483 "base_bdevs_list": [ 00:14:24.483 { 00:14:24.483 "name": "spare", 00:14:24.483 "uuid": "22e37688-44ea-53e0-9bef-34906e788b72", 00:14:24.483 "is_configured": true, 00:14:24.483 "data_offset": 2048, 00:14:24.483 "data_size": 63488 00:14:24.483 }, 00:14:24.483 { 00:14:24.483 "name": "BaseBdev2", 00:14:24.483 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:24.483 "is_configured": true, 00:14:24.483 "data_offset": 2048, 00:14:24.483 "data_size": 63488 00:14:24.483 } 00:14:24.483 ] 00:14:24.483 }' 00:14:24.483 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.743 [2024-12-06 16:30:06.441024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.743 "name": "raid_bdev1", 00:14:24.743 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:24.743 "strip_size_kb": 0, 00:14:24.743 "state": "online", 00:14:24.743 "raid_level": "raid1", 00:14:24.743 "superblock": true, 00:14:24.743 "num_base_bdevs": 2, 00:14:24.743 "num_base_bdevs_discovered": 1, 00:14:24.743 "num_base_bdevs_operational": 1, 00:14:24.743 "base_bdevs_list": [ 00:14:24.743 { 00:14:24.743 "name": null, 00:14:24.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.743 "is_configured": false, 00:14:24.743 "data_offset": 0, 00:14:24.743 "data_size": 63488 00:14:24.743 }, 00:14:24.743 { 00:14:24.743 "name": "BaseBdev2", 00:14:24.743 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:24.743 "is_configured": true, 00:14:24.743 "data_offset": 2048, 00:14:24.743 "data_size": 63488 00:14:24.743 } 00:14:24.743 ] 00:14:24.743 }' 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.743 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.313 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.313 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.313 16:30:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.313 [2024-12-06 16:30:06.896361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.313 [2024-12-06 16:30:06.896581] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:25.313 [2024-12-06 16:30:06.896599] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:25.313 [2024-12-06 16:30:06.896643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.313 [2024-12-06 16:30:06.901687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:14:25.313 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.313 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:25.313 [2024-12-06 16:30:06.903963] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.252 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.252 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.252 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.252 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.252 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.252 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.252 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.252 16:30:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:26.252 16:30:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.252 16:30:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.252 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.252 "name": "raid_bdev1", 00:14:26.252 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:26.252 "strip_size_kb": 0, 00:14:26.252 "state": "online", 00:14:26.252 "raid_level": "raid1", 00:14:26.252 "superblock": true, 00:14:26.252 "num_base_bdevs": 2, 00:14:26.252 "num_base_bdevs_discovered": 2, 00:14:26.252 "num_base_bdevs_operational": 2, 00:14:26.252 "process": { 00:14:26.252 "type": "rebuild", 00:14:26.252 "target": "spare", 00:14:26.252 "progress": { 00:14:26.252 "blocks": 20480, 00:14:26.252 "percent": 32 00:14:26.252 } 00:14:26.252 }, 00:14:26.252 "base_bdevs_list": [ 00:14:26.252 { 00:14:26.252 "name": "spare", 00:14:26.252 "uuid": "22e37688-44ea-53e0-9bef-34906e788b72", 00:14:26.252 "is_configured": true, 00:14:26.252 "data_offset": 2048, 00:14:26.252 "data_size": 63488 00:14:26.252 }, 00:14:26.252 { 00:14:26.252 "name": "BaseBdev2", 00:14:26.252 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:26.252 "is_configured": true, 00:14:26.252 "data_offset": 2048, 00:14:26.252 "data_size": 63488 00:14:26.252 } 00:14:26.252 ] 00:14:26.252 }' 00:14:26.252 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.252 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.252 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.252 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.252 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:26.252 16:30:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.252 16:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.252 [2024-12-06 16:30:08.060300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.513 [2024-12-06 16:30:08.109209] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:26.513 [2024-12-06 16:30:08.109374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.513 [2024-12-06 16:30:08.109426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.513 [2024-12-06 16:30:08.109454] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.513 "name": "raid_bdev1", 00:14:26.513 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:26.513 "strip_size_kb": 0, 00:14:26.513 "state": "online", 00:14:26.513 "raid_level": "raid1", 00:14:26.513 "superblock": true, 00:14:26.513 "num_base_bdevs": 2, 00:14:26.513 "num_base_bdevs_discovered": 1, 00:14:26.513 "num_base_bdevs_operational": 1, 00:14:26.513 "base_bdevs_list": [ 00:14:26.513 { 00:14:26.513 "name": null, 00:14:26.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.513 "is_configured": false, 00:14:26.513 "data_offset": 0, 00:14:26.513 "data_size": 63488 00:14:26.513 }, 00:14:26.513 { 00:14:26.513 "name": "BaseBdev2", 00:14:26.513 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:26.513 "is_configured": true, 00:14:26.513 "data_offset": 2048, 00:14:26.513 "data_size": 63488 00:14:26.513 } 00:14:26.513 ] 00:14:26.513 }' 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.513 16:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.773 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:26.773 16:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:26.773 16:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.773 [2024-12-06 16:30:08.573675] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:26.773 [2024-12-06 16:30:08.573817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.773 [2024-12-06 16:30:08.573878] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:26.773 [2024-12-06 16:30:08.573914] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.773 [2024-12-06 16:30:08.574481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.773 [2024-12-06 16:30:08.574556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:26.773 [2024-12-06 16:30:08.574690] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:26.773 [2024-12-06 16:30:08.574734] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:26.773 [2024-12-06 16:30:08.574804] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:26.773 [2024-12-06 16:30:08.574865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.773 [2024-12-06 16:30:08.579974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:26.773 spare 00:14:26.773 16:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.773 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:26.773 [2024-12-06 16:30:08.582245] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.149 "name": "raid_bdev1", 00:14:28.149 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:28.149 "strip_size_kb": 0, 00:14:28.149 "state": "online", 00:14:28.149 
"raid_level": "raid1", 00:14:28.149 "superblock": true, 00:14:28.149 "num_base_bdevs": 2, 00:14:28.149 "num_base_bdevs_discovered": 2, 00:14:28.149 "num_base_bdevs_operational": 2, 00:14:28.149 "process": { 00:14:28.149 "type": "rebuild", 00:14:28.149 "target": "spare", 00:14:28.149 "progress": { 00:14:28.149 "blocks": 20480, 00:14:28.149 "percent": 32 00:14:28.149 } 00:14:28.149 }, 00:14:28.149 "base_bdevs_list": [ 00:14:28.149 { 00:14:28.149 "name": "spare", 00:14:28.149 "uuid": "22e37688-44ea-53e0-9bef-34906e788b72", 00:14:28.149 "is_configured": true, 00:14:28.149 "data_offset": 2048, 00:14:28.149 "data_size": 63488 00:14:28.149 }, 00:14:28.149 { 00:14:28.149 "name": "BaseBdev2", 00:14:28.149 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:28.149 "is_configured": true, 00:14:28.149 "data_offset": 2048, 00:14:28.149 "data_size": 63488 00:14:28.149 } 00:14:28.149 ] 00:14:28.149 }' 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.149 [2024-12-06 16:30:09.730097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.149 [2024-12-06 16:30:09.787668] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:28.149 [2024-12-06 16:30:09.787889] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.149 [2024-12-06 16:30:09.787936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.149 [2024-12-06 16:30:09.787966] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.149 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.150 16:30:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.150 "name": "raid_bdev1", 00:14:28.150 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:28.150 "strip_size_kb": 0, 00:14:28.150 "state": "online", 00:14:28.150 "raid_level": "raid1", 00:14:28.150 "superblock": true, 00:14:28.150 "num_base_bdevs": 2, 00:14:28.150 "num_base_bdevs_discovered": 1, 00:14:28.150 "num_base_bdevs_operational": 1, 00:14:28.150 "base_bdevs_list": [ 00:14:28.150 { 00:14:28.150 "name": null, 00:14:28.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.150 "is_configured": false, 00:14:28.150 "data_offset": 0, 00:14:28.150 "data_size": 63488 00:14:28.150 }, 00:14:28.150 { 00:14:28.150 "name": "BaseBdev2", 00:14:28.150 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:28.150 "is_configured": true, 00:14:28.150 "data_offset": 2048, 00:14:28.150 "data_size": 63488 00:14:28.150 } 00:14:28.150 ] 00:14:28.150 }' 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.150 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.717 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.717 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.717 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.717 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.717 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.717 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.717 16:30:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.717 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.717 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.717 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.717 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.717 "name": "raid_bdev1", 00:14:28.717 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:28.717 "strip_size_kb": 0, 00:14:28.717 "state": "online", 00:14:28.717 "raid_level": "raid1", 00:14:28.717 "superblock": true, 00:14:28.717 "num_base_bdevs": 2, 00:14:28.717 "num_base_bdevs_discovered": 1, 00:14:28.717 "num_base_bdevs_operational": 1, 00:14:28.717 "base_bdevs_list": [ 00:14:28.717 { 00:14:28.717 "name": null, 00:14:28.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.717 "is_configured": false, 00:14:28.717 "data_offset": 0, 00:14:28.717 "data_size": 63488 00:14:28.717 }, 00:14:28.717 { 00:14:28.717 "name": "BaseBdev2", 00:14:28.717 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:28.717 "is_configured": true, 00:14:28.717 "data_offset": 2048, 00:14:28.717 "data_size": 63488 00:14:28.717 } 00:14:28.717 ] 00:14:28.717 }' 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.718 [2024-12-06 16:30:10.427973] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:28.718 [2024-12-06 16:30:10.428048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.718 [2024-12-06 16:30:10.428071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:28.718 [2024-12-06 16:30:10.428083] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.718 [2024-12-06 16:30:10.428552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.718 [2024-12-06 16:30:10.428577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:28.718 [2024-12-06 16:30:10.428660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:28.718 [2024-12-06 16:30:10.428682] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:28.718 [2024-12-06 16:30:10.428703] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:28.718 [2024-12-06 16:30:10.428722] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:28.718 BaseBdev1 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.718 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.655 "name": "raid_bdev1", 00:14:29.655 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:29.655 
"strip_size_kb": 0, 00:14:29.655 "state": "online", 00:14:29.655 "raid_level": "raid1", 00:14:29.655 "superblock": true, 00:14:29.655 "num_base_bdevs": 2, 00:14:29.655 "num_base_bdevs_discovered": 1, 00:14:29.655 "num_base_bdevs_operational": 1, 00:14:29.655 "base_bdevs_list": [ 00:14:29.655 { 00:14:29.655 "name": null, 00:14:29.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.655 "is_configured": false, 00:14:29.655 "data_offset": 0, 00:14:29.655 "data_size": 63488 00:14:29.655 }, 00:14:29.655 { 00:14:29.655 "name": "BaseBdev2", 00:14:29.655 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:29.655 "is_configured": true, 00:14:29.655 "data_offset": 2048, 00:14:29.655 "data_size": 63488 00:14:29.655 } 00:14:29.655 ] 00:14:29.655 }' 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.655 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.222 16:30:11 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.222 "name": "raid_bdev1", 00:14:30.222 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:30.222 "strip_size_kb": 0, 00:14:30.222 "state": "online", 00:14:30.222 "raid_level": "raid1", 00:14:30.222 "superblock": true, 00:14:30.222 "num_base_bdevs": 2, 00:14:30.222 "num_base_bdevs_discovered": 1, 00:14:30.222 "num_base_bdevs_operational": 1, 00:14:30.222 "base_bdevs_list": [ 00:14:30.222 { 00:14:30.222 "name": null, 00:14:30.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.222 "is_configured": false, 00:14:30.222 "data_offset": 0, 00:14:30.222 "data_size": 63488 00:14:30.222 }, 00:14:30.222 { 00:14:30.222 "name": "BaseBdev2", 00:14:30.222 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:30.222 "is_configured": true, 00:14:30.222 "data_offset": 2048, 00:14:30.222 "data_size": 63488 00:14:30.222 } 00:14:30.222 ] 00:14:30.222 }' 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.222 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.222 [2024-12-06 16:30:12.049358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.222 [2024-12-06 16:30:12.049609] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:30.222 [2024-12-06 16:30:12.049679] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:30.222 request: 00:14:30.222 { 00:14:30.222 "base_bdev": "BaseBdev1", 00:14:30.222 "raid_bdev": "raid_bdev1", 00:14:30.222 "method": "bdev_raid_add_base_bdev", 00:14:30.222 "req_id": 1 00:14:30.222 } 00:14:30.222 Got JSON-RPC error response 00:14:30.222 response: 00:14:30.222 { 00:14:30.222 "code": -22, 00:14:30.222 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:30.222 } 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:30.222 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:30.222 16:30:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:30.481 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.416 "name": "raid_bdev1", 00:14:31.416 "uuid": 
"7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:31.416 "strip_size_kb": 0, 00:14:31.416 "state": "online", 00:14:31.416 "raid_level": "raid1", 00:14:31.416 "superblock": true, 00:14:31.416 "num_base_bdevs": 2, 00:14:31.416 "num_base_bdevs_discovered": 1, 00:14:31.416 "num_base_bdevs_operational": 1, 00:14:31.416 "base_bdevs_list": [ 00:14:31.416 { 00:14:31.416 "name": null, 00:14:31.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.416 "is_configured": false, 00:14:31.416 "data_offset": 0, 00:14:31.416 "data_size": 63488 00:14:31.416 }, 00:14:31.416 { 00:14:31.416 "name": "BaseBdev2", 00:14:31.416 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:31.416 "is_configured": true, 00:14:31.416 "data_offset": 2048, 00:14:31.416 "data_size": 63488 00:14:31.416 } 00:14:31.416 ] 00:14:31.416 }' 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.416 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.981 "name": "raid_bdev1", 00:14:31.981 "uuid": "7abe9696-fcfe-4b69-a7f9-4f324d1b5994", 00:14:31.981 "strip_size_kb": 0, 00:14:31.981 "state": "online", 00:14:31.981 "raid_level": "raid1", 00:14:31.981 "superblock": true, 00:14:31.981 "num_base_bdevs": 2, 00:14:31.981 "num_base_bdevs_discovered": 1, 00:14:31.981 "num_base_bdevs_operational": 1, 00:14:31.981 "base_bdevs_list": [ 00:14:31.981 { 00:14:31.981 "name": null, 00:14:31.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.981 "is_configured": false, 00:14:31.981 "data_offset": 0, 00:14:31.981 "data_size": 63488 00:14:31.981 }, 00:14:31.981 { 00:14:31.981 "name": "BaseBdev2", 00:14:31.981 "uuid": "daabe8e0-570f-5841-bfbb-0eb25641701a", 00:14:31.981 "is_configured": true, 00:14:31.981 "data_offset": 2048, 00:14:31.981 "data_size": 63488 00:14:31.981 } 00:14:31.981 ] 00:14:31.981 }' 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86839 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86839 ']' 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 86839 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86839 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86839' 00:14:31.981 killing process with pid 86839 00:14:31.981 Received shutdown signal, test time was about 60.000000 seconds 00:14:31.981 00:14:31.981 Latency(us) 00:14:31.981 [2024-12-06T16:30:13.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.981 [2024-12-06T16:30:13.820Z] =================================================================================================================== 00:14:31.981 [2024-12-06T16:30:13.820Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 86839 00:14:31.981 [2024-12-06 16:30:13.693327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:31.981 [2024-12-06 16:30:13.693487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.981 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 86839 00:14:31.981 [2024-12-06 16:30:13.693553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.981 [2024-12-06 16:30:13.693565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:31.981 [2024-12-06 16:30:13.726650] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.240 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:14:32.240 00:14:32.240 real 0m22.092s 00:14:32.240 user 0m27.557s 00:14:32.240 sys 0m3.566s 00:14:32.240 ************************************ 00:14:32.240 END TEST raid_rebuild_test_sb 00:14:32.240 ************************************ 00:14:32.240 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.240 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.240 16:30:13 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:32.240 16:30:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:32.240 16:30:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.240 16:30:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.240 ************************************ 00:14:32.240 START TEST raid_rebuild_test_io 00:14:32.240 ************************************ 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87554 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87554 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
87554 ']' 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.240 16:30:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.499 [2024-12-06 16:30:14.110628] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:14:32.499 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:32.499 Zero copy mechanism will not be used. 00:14:32.499 [2024-12-06 16:30:14.110884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87554 ] 00:14:32.499 [2024-12-06 16:30:14.285726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.499 [2024-12-06 16:30:14.316232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.757 [2024-12-06 16:30:14.361105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.757 [2024-12-06 16:30:14.361156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.325 BaseBdev1_malloc 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.325 [2024-12-06 16:30:15.050672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:33.325 [2024-12-06 16:30:15.050801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.325 [2024-12-06 16:30:15.050866] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:33.325 [2024-12-06 16:30:15.050907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.325 [2024-12-06 16:30:15.053487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.325 [2024-12-06 16:30:15.053577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:33.325 BaseBdev1 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.325 BaseBdev2_malloc 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.325 [2024-12-06 16:30:15.080051] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:33.325 [2024-12-06 16:30:15.080173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.325 [2024-12-06 16:30:15.080240] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:33.325 [2024-12-06 16:30:15.080278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.325 [2024-12-06 16:30:15.082789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.325 [2024-12-06 16:30:15.082874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:33.325 BaseBdev2 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.325 spare_malloc 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.325 spare_delay 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.325 [2024-12-06 16:30:15.121424] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:33.325 [2024-12-06 16:30:15.121491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.325 [2024-12-06 16:30:15.121518] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:33.325 [2024-12-06 16:30:15.121529] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.325 [2024-12-06 16:30:15.124056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.325 [2024-12-06 16:30:15.124098] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:33.325 spare 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.325 
16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.325 [2024-12-06 16:30:15.133433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.325 [2024-12-06 16:30:15.135607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.325 [2024-12-06 16:30:15.135775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:33.325 [2024-12-06 16:30:15.135801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:33.325 [2024-12-06 16:30:15.136120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:14:33.325 [2024-12-06 16:30:15.136323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:33.325 [2024-12-06 16:30:15.136339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:33.325 [2024-12-06 16:30:15.136495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.325 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.584 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.584 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.584 "name": "raid_bdev1", 00:14:33.584 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:33.584 "strip_size_kb": 0, 00:14:33.584 "state": "online", 00:14:33.584 "raid_level": "raid1", 00:14:33.584 "superblock": false, 00:14:33.584 "num_base_bdevs": 2, 00:14:33.584 "num_base_bdevs_discovered": 2, 00:14:33.584 "num_base_bdevs_operational": 2, 00:14:33.584 "base_bdevs_list": [ 00:14:33.584 { 00:14:33.584 "name": "BaseBdev1", 00:14:33.584 "uuid": "3864495e-8931-5673-aa66-25f8de888ab4", 00:14:33.584 "is_configured": true, 00:14:33.584 "data_offset": 0, 00:14:33.584 "data_size": 65536 00:14:33.584 }, 00:14:33.584 { 00:14:33.584 "name": "BaseBdev2", 00:14:33.584 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:33.584 "is_configured": true, 00:14:33.584 "data_offset": 0, 00:14:33.584 "data_size": 65536 00:14:33.584 } 00:14:33.584 ] 00:14:33.584 }' 00:14:33.584 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.584 16:30:15 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:14:33.840 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:33.840 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.840 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.840 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:33.840 [2024-12-06 16:30:15.608988] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.840 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.840 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:33.840 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:33.840 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.841 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.841 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.841 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:34.098 [2024-12-06 16:30:15.708484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.098 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.099 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.099 16:30:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.099 "name": "raid_bdev1", 00:14:34.099 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:34.099 "strip_size_kb": 0, 00:14:34.099 "state": "online", 00:14:34.099 "raid_level": "raid1", 00:14:34.099 "superblock": false, 00:14:34.099 "num_base_bdevs": 2, 00:14:34.099 "num_base_bdevs_discovered": 1, 00:14:34.099 "num_base_bdevs_operational": 1, 00:14:34.099 "base_bdevs_list": [ 00:14:34.099 { 00:14:34.099 "name": null, 00:14:34.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.099 "is_configured": false, 00:14:34.099 "data_offset": 0, 00:14:34.099 "data_size": 65536 00:14:34.099 }, 00:14:34.099 { 00:14:34.099 "name": "BaseBdev2", 00:14:34.099 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:34.099 "is_configured": true, 00:14:34.099 "data_offset": 0, 00:14:34.099 "data_size": 65536 00:14:34.099 } 00:14:34.099 ] 00:14:34.099 }' 00:14:34.099 16:30:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.099 16:30:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.099 [2024-12-06 16:30:15.829583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:34.099 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:34.099 Zero copy mechanism will not be used. 00:14:34.099 Running I/O for 60 seconds... 
00:14:34.357 16:30:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.357 16:30:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.357 16:30:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.357 [2024-12-06 16:30:16.172513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.617 16:30:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.617 16:30:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:34.617 [2024-12-06 16:30:16.239023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:34.617 [2024-12-06 16:30:16.241317] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.906 [2024-12-06 16:30:16.488673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:34.906 [2024-12-06 16:30:16.489036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:35.163 [2024-12-06 16:30:16.815413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:35.163 [2024-12-06 16:30:16.815950] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:35.421 164.00 IOPS, 492.00 MiB/s [2024-12-06T16:30:17.260Z] [2024-12-06 16:30:17.040852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:35.421 [2024-12-06 16:30:17.041233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:35.421 16:30:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.421 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.421 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.421 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.421 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.421 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.421 16:30:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.421 16:30:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.421 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.421 16:30:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.679 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.679 "name": "raid_bdev1", 00:14:35.679 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:35.679 "strip_size_kb": 0, 00:14:35.679 "state": "online", 00:14:35.679 "raid_level": "raid1", 00:14:35.679 "superblock": false, 00:14:35.679 "num_base_bdevs": 2, 00:14:35.679 "num_base_bdevs_discovered": 2, 00:14:35.679 "num_base_bdevs_operational": 2, 00:14:35.679 "process": { 00:14:35.679 "type": "rebuild", 00:14:35.679 "target": "spare", 00:14:35.679 "progress": { 00:14:35.679 "blocks": 10240, 00:14:35.679 "percent": 15 00:14:35.679 } 00:14:35.679 }, 00:14:35.679 "base_bdevs_list": [ 00:14:35.679 { 00:14:35.679 "name": "spare", 00:14:35.679 "uuid": "26f43128-fca4-55e1-a56b-63438e73bbb4", 00:14:35.679 "is_configured": true, 00:14:35.679 "data_offset": 0, 00:14:35.679 "data_size": 65536 00:14:35.679 }, 00:14:35.679 { 
00:14:35.679 "name": "BaseBdev2", 00:14:35.679 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:35.679 "is_configured": true, 00:14:35.679 "data_offset": 0, 00:14:35.679 "data_size": 65536 00:14:35.679 } 00:14:35.679 ] 00:14:35.679 }' 00:14:35.679 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.679 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.679 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.679 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.679 [2024-12-06 16:30:17.386160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:35.679 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:35.679 16:30:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.679 16:30:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.679 [2024-12-06 16:30:17.397274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.937 [2024-12-06 16:30:17.568675] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:35.937 [2024-12-06 16:30:17.578710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.937 [2024-12-06 16:30:17.578855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.937 [2024-12-06 16:30:17.578881] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:35.937 [2024-12-06 16:30:17.600029] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:14:35.937 16:30:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.937 "name": "raid_bdev1", 00:14:35.937 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:35.937 "strip_size_kb": 0, 00:14:35.937 "state": "online", 
00:14:35.937 "raid_level": "raid1", 00:14:35.937 "superblock": false, 00:14:35.937 "num_base_bdevs": 2, 00:14:35.937 "num_base_bdevs_discovered": 1, 00:14:35.937 "num_base_bdevs_operational": 1, 00:14:35.937 "base_bdevs_list": [ 00:14:35.937 { 00:14:35.937 "name": null, 00:14:35.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.937 "is_configured": false, 00:14:35.937 "data_offset": 0, 00:14:35.937 "data_size": 65536 00:14:35.937 }, 00:14:35.937 { 00:14:35.937 "name": "BaseBdev2", 00:14:35.937 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:35.937 "is_configured": true, 00:14:35.937 "data_offset": 0, 00:14:35.937 "data_size": 65536 00:14:35.937 } 00:14:35.937 ] 00:14:35.937 }' 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.937 16:30:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.452 141.00 IOPS, 423.00 MiB/s [2024-12-06T16:30:18.291Z] 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.452 16:30:18 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.452 "name": "raid_bdev1", 00:14:36.452 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:36.452 "strip_size_kb": 0, 00:14:36.452 "state": "online", 00:14:36.452 "raid_level": "raid1", 00:14:36.452 "superblock": false, 00:14:36.452 "num_base_bdevs": 2, 00:14:36.452 "num_base_bdevs_discovered": 1, 00:14:36.452 "num_base_bdevs_operational": 1, 00:14:36.452 "base_bdevs_list": [ 00:14:36.452 { 00:14:36.452 "name": null, 00:14:36.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.452 "is_configured": false, 00:14:36.452 "data_offset": 0, 00:14:36.452 "data_size": 65536 00:14:36.452 }, 00:14:36.452 { 00:14:36.452 "name": "BaseBdev2", 00:14:36.452 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:36.452 "is_configured": true, 00:14:36.452 "data_offset": 0, 00:14:36.452 "data_size": 65536 00:14:36.452 } 00:14:36.452 ] 00:14:36.452 }' 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.452 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.453 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:36.453 16:30:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.453 16:30:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.453 [2024-12-06 16:30:18.247219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.453 16:30:18 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.453 16:30:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:36.710 [2024-12-06 16:30:18.321719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:36.710 [2024-12-06 16:30:18.324000] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.710 [2024-12-06 16:30:18.446989] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:36.710 [2024-12-06 16:30:18.447557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:36.969 [2024-12-06 16:30:18.676799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:36.969 [2024-12-06 16:30:18.677149] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:37.228 154.67 IOPS, 464.00 MiB/s [2024-12-06T16:30:19.067Z] [2024-12-06 16:30:18.905486] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:37.228 [2024-12-06 16:30:18.906123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:37.228 [2024-12-06 16:30:19.040209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:37.487 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.487 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.487 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.487 16:30:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.487 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.487 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.487 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.487 16:30:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.487 16:30:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.746 16:30:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.746 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.746 "name": "raid_bdev1", 00:14:37.746 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:37.746 "strip_size_kb": 0, 00:14:37.746 "state": "online", 00:14:37.746 "raid_level": "raid1", 00:14:37.746 "superblock": false, 00:14:37.746 "num_base_bdevs": 2, 00:14:37.746 "num_base_bdevs_discovered": 2, 00:14:37.746 "num_base_bdevs_operational": 2, 00:14:37.746 "process": { 00:14:37.746 "type": "rebuild", 00:14:37.746 "target": "spare", 00:14:37.746 "progress": { 00:14:37.746 "blocks": 12288, 00:14:37.746 "percent": 18 00:14:37.746 } 00:14:37.746 }, 00:14:37.746 "base_bdevs_list": [ 00:14:37.746 { 00:14:37.746 "name": "spare", 00:14:37.746 "uuid": "26f43128-fca4-55e1-a56b-63438e73bbb4", 00:14:37.746 "is_configured": true, 00:14:37.746 "data_offset": 0, 00:14:37.746 "data_size": 65536 00:14:37.746 }, 00:14:37.746 { 00:14:37.746 "name": "BaseBdev2", 00:14:37.746 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:37.746 "is_configured": true, 00:14:37.746 "data_offset": 0, 00:14:37.746 "data_size": 65536 00:14:37.746 } 00:14:37.746 ] 00:14:37.746 }' 00:14:37.746 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.746 
16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.746 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=329 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.747 [2024-12-06 16:30:19.434188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:37.747 [2024-12-06 16:30:19.434524] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.747 16:30:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.747 "name": "raid_bdev1", 00:14:37.747 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:37.747 "strip_size_kb": 0, 00:14:37.747 "state": "online", 00:14:37.747 "raid_level": "raid1", 00:14:37.747 "superblock": false, 00:14:37.747 "num_base_bdevs": 2, 00:14:37.747 "num_base_bdevs_discovered": 2, 00:14:37.747 "num_base_bdevs_operational": 2, 00:14:37.747 "process": { 00:14:37.747 "type": "rebuild", 00:14:37.747 "target": "spare", 00:14:37.747 "progress": { 00:14:37.747 "blocks": 16384, 00:14:37.747 "percent": 25 00:14:37.747 } 00:14:37.747 }, 00:14:37.747 "base_bdevs_list": [ 00:14:37.747 { 00:14:37.747 "name": "spare", 00:14:37.747 "uuid": "26f43128-fca4-55e1-a56b-63438e73bbb4", 00:14:37.747 "is_configured": true, 00:14:37.747 "data_offset": 0, 00:14:37.747 "data_size": 65536 00:14:37.747 }, 00:14:37.747 { 00:14:37.747 "name": "BaseBdev2", 00:14:37.747 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:37.747 "is_configured": true, 00:14:37.747 "data_offset": 0, 00:14:37.747 "data_size": 65536 00:14:37.747 } 00:14:37.747 ] 00:14:37.747 }' 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.747 16:30:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.596 142.25 IOPS, 426.75 MiB/s [2024-12-06T16:30:20.435Z] [2024-12-06 16:30:20.107375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:38.596 [2024-12-06 16:30:20.310988] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:38.854 [2024-12-06 16:30:20.551417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:38.854 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.854 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.854 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.854 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.854 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.854 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.854 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.854 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.854 16:30:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.854 16:30:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.854 16:30:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.854 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:38.854 "name": "raid_bdev1", 00:14:38.854 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:38.854 "strip_size_kb": 0, 00:14:38.854 "state": "online", 00:14:38.854 "raid_level": "raid1", 00:14:38.854 "superblock": false, 00:14:38.854 "num_base_bdevs": 2, 00:14:38.854 "num_base_bdevs_discovered": 2, 00:14:38.855 "num_base_bdevs_operational": 2, 00:14:38.855 "process": { 00:14:38.855 "type": "rebuild", 00:14:38.855 "target": "spare", 00:14:38.855 "progress": { 00:14:38.855 "blocks": 32768, 00:14:38.855 "percent": 50 00:14:38.855 } 00:14:38.855 }, 00:14:38.855 "base_bdevs_list": [ 00:14:38.855 { 00:14:38.855 "name": "spare", 00:14:38.855 "uuid": "26f43128-fca4-55e1-a56b-63438e73bbb4", 00:14:38.855 "is_configured": true, 00:14:38.855 "data_offset": 0, 00:14:38.855 "data_size": 65536 00:14:38.855 }, 00:14:38.855 { 00:14:38.855 "name": "BaseBdev2", 00:14:38.855 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:38.855 "is_configured": true, 00:14:38.855 "data_offset": 0, 00:14:38.855 "data_size": 65536 00:14:38.855 } 00:14:38.855 ] 00:14:38.855 }' 00:14:38.855 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.855 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.855 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.113 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.113 16:30:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:39.113 [2024-12-06 16:30:20.776764] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:39.372 123.80 IOPS, 371.40 MiB/s [2024-12-06T16:30:21.211Z] [2024-12-06 16:30:20.999178] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 
offset_end: 43008 00:14:39.372 [2024-12-06 16:30:20.999791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:39.630 [2024-12-06 16:30:21.208919] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:39.630 [2024-12-06 16:30:21.209331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:39.889 [2024-12-06 16:30:21.522949] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:39.889 [2024-12-06 16:30:21.641404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:39.889 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.889 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.889 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.889 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.889 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.889 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.148 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.148 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.148 16:30:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.148 16:30:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.148 16:30:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.148 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.148 "name": "raid_bdev1", 00:14:40.148 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:40.148 "strip_size_kb": 0, 00:14:40.148 "state": "online", 00:14:40.148 "raid_level": "raid1", 00:14:40.148 "superblock": false, 00:14:40.148 "num_base_bdevs": 2, 00:14:40.148 "num_base_bdevs_discovered": 2, 00:14:40.148 "num_base_bdevs_operational": 2, 00:14:40.148 "process": { 00:14:40.149 "type": "rebuild", 00:14:40.149 "target": "spare", 00:14:40.149 "progress": { 00:14:40.149 "blocks": 47104, 00:14:40.149 "percent": 71 00:14:40.149 } 00:14:40.149 }, 00:14:40.149 "base_bdevs_list": [ 00:14:40.149 { 00:14:40.149 "name": "spare", 00:14:40.149 "uuid": "26f43128-fca4-55e1-a56b-63438e73bbb4", 00:14:40.149 "is_configured": true, 00:14:40.149 "data_offset": 0, 00:14:40.149 "data_size": 65536 00:14:40.149 }, 00:14:40.149 { 00:14:40.149 "name": "BaseBdev2", 00:14:40.149 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:40.149 "is_configured": true, 00:14:40.149 "data_offset": 0, 00:14:40.149 "data_size": 65536 00:14:40.149 } 00:14:40.149 ] 00:14:40.149 }' 00:14:40.149 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.149 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.149 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.149 109.33 IOPS, 328.00 MiB/s [2024-12-06T16:30:21.988Z] 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.149 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.717 [2024-12-06 16:30:22.287134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 
offset_end: 61440 00:14:40.718 [2024-12-06 16:30:22.523017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:41.286 99.86 IOPS, 299.57 MiB/s [2024-12-06T16:30:23.125Z] [2024-12-06 16:30:22.858112] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.286 "name": "raid_bdev1", 00:14:41.286 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:41.286 "strip_size_kb": 0, 00:14:41.286 "state": "online", 00:14:41.286 "raid_level": "raid1", 00:14:41.286 "superblock": false, 00:14:41.286 "num_base_bdevs": 2, 00:14:41.286 "num_base_bdevs_discovered": 2, 
00:14:41.286 "num_base_bdevs_operational": 2, 00:14:41.286 "process": { 00:14:41.286 "type": "rebuild", 00:14:41.286 "target": "spare", 00:14:41.286 "progress": { 00:14:41.286 "blocks": 65536, 00:14:41.286 "percent": 100 00:14:41.286 } 00:14:41.286 }, 00:14:41.286 "base_bdevs_list": [ 00:14:41.286 { 00:14:41.286 "name": "spare", 00:14:41.286 "uuid": "26f43128-fca4-55e1-a56b-63438e73bbb4", 00:14:41.286 "is_configured": true, 00:14:41.286 "data_offset": 0, 00:14:41.286 "data_size": 65536 00:14:41.286 }, 00:14:41.286 { 00:14:41.286 "name": "BaseBdev2", 00:14:41.286 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:41.286 "is_configured": true, 00:14:41.286 "data_offset": 0, 00:14:41.286 "data_size": 65536 00:14:41.286 } 00:14:41.286 ] 00:14:41.286 }' 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.286 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.286 [2024-12-06 16:30:22.965008] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:41.286 [2024-12-06 16:30:22.967787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.286 16:30:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.286 16:30:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.225 90.88 IOPS, 272.62 MiB/s [2024-12-06T16:30:24.064Z] 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.225 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.225 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.225 16:30:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.225 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.225 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.225 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.225 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.225 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.225 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.225 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.484 "name": "raid_bdev1", 00:14:42.484 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:42.484 "strip_size_kb": 0, 00:14:42.484 "state": "online", 00:14:42.484 "raid_level": "raid1", 00:14:42.484 "superblock": false, 00:14:42.484 "num_base_bdevs": 2, 00:14:42.484 "num_base_bdevs_discovered": 2, 00:14:42.484 "num_base_bdevs_operational": 2, 00:14:42.484 "base_bdevs_list": [ 00:14:42.484 { 00:14:42.484 "name": "spare", 00:14:42.484 "uuid": "26f43128-fca4-55e1-a56b-63438e73bbb4", 00:14:42.484 "is_configured": true, 00:14:42.484 "data_offset": 0, 00:14:42.484 "data_size": 65536 00:14:42.484 }, 00:14:42.484 { 00:14:42.484 "name": "BaseBdev2", 00:14:42.484 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:42.484 "is_configured": true, 00:14:42.484 "data_offset": 0, 00:14:42.484 "data_size": 65536 00:14:42.484 } 00:14:42.484 ] 00:14:42.484 }' 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.484 "name": "raid_bdev1", 00:14:42.484 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:42.484 "strip_size_kb": 0, 00:14:42.484 "state": "online", 00:14:42.484 "raid_level": "raid1", 00:14:42.484 "superblock": false, 00:14:42.484 "num_base_bdevs": 2, 00:14:42.484 "num_base_bdevs_discovered": 2, 00:14:42.484 "num_base_bdevs_operational": 2, 00:14:42.484 "base_bdevs_list": [ 00:14:42.484 { 00:14:42.484 
"name": "spare", 00:14:42.484 "uuid": "26f43128-fca4-55e1-a56b-63438e73bbb4", 00:14:42.484 "is_configured": true, 00:14:42.484 "data_offset": 0, 00:14:42.484 "data_size": 65536 00:14:42.484 }, 00:14:42.484 { 00:14:42.484 "name": "BaseBdev2", 00:14:42.484 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:42.484 "is_configured": true, 00:14:42.484 "data_offset": 0, 00:14:42.484 "data_size": 65536 00:14:42.484 } 00:14:42.484 ] 00:14:42.484 }' 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.484 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.744 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.744 "name": "raid_bdev1", 00:14:42.744 "uuid": "10ced68a-1e6d-4f53-a036-1265e2708900", 00:14:42.744 "strip_size_kb": 0, 00:14:42.744 "state": "online", 00:14:42.744 "raid_level": "raid1", 00:14:42.744 "superblock": false, 00:14:42.744 "num_base_bdevs": 2, 00:14:42.744 "num_base_bdevs_discovered": 2, 00:14:42.744 "num_base_bdevs_operational": 2, 00:14:42.744 "base_bdevs_list": [ 00:14:42.744 { 00:14:42.744 "name": "spare", 00:14:42.744 "uuid": "26f43128-fca4-55e1-a56b-63438e73bbb4", 00:14:42.744 "is_configured": true, 00:14:42.744 "data_offset": 0, 00:14:42.744 "data_size": 65536 00:14:42.744 }, 00:14:42.744 { 00:14:42.744 "name": "BaseBdev2", 00:14:42.744 "uuid": "5b306f3f-0811-5f75-9d09-29b4120e6b60", 00:14:42.744 "is_configured": true, 00:14:42.744 "data_offset": 0, 00:14:42.744 "data_size": 65536 00:14:42.744 } 00:14:42.744 ] 00:14:42.744 }' 00:14:42.744 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.744 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.004 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:43.004 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.004 16:30:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:43.004 [2024-12-06 16:30:24.704658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.004 [2024-12-06 16:30:24.704695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.004 00:14:43.004 Latency(us) 00:14:43.004 [2024-12-06T16:30:24.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.004 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:43.004 raid_bdev1 : 9.00 84.46 253.39 0.00 0.00 16394.73 321.96 110352.32 00:14:43.004 [2024-12-06T16:30:24.843Z] =================================================================================================================== 00:14:43.004 [2024-12-06T16:30:24.843Z] Total : 84.46 253.39 0.00 0.00 16394.73 321.96 110352.32 00:14:43.004 [2024-12-06 16:30:24.817359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.004 { 00:14:43.004 "results": [ 00:14:43.004 { 00:14:43.004 "job": "raid_bdev1", 00:14:43.004 "core_mask": "0x1", 00:14:43.004 "workload": "randrw", 00:14:43.004 "percentage": 50, 00:14:43.004 "status": "finished", 00:14:43.004 "queue_depth": 2, 00:14:43.004 "io_size": 3145728, 00:14:43.004 "runtime": 8.997831, 00:14:43.004 "iops": 84.46480046135564, 00:14:43.004 "mibps": 253.3944013840669, 00:14:43.004 "io_failed": 0, 00:14:43.004 "io_timeout": 0, 00:14:43.004 "avg_latency_us": 16394.734231211216, 00:14:43.004 "min_latency_us": 321.95633187772927, 00:14:43.004 "max_latency_us": 110352.32139737991 00:14:43.005 } 00:14:43.005 ], 00:14:43.005 "core_count": 1 00:14:43.005 } 00:14:43.005 [2024-12-06 16:30:24.817495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.005 [2024-12-06 16:30:24.817587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.005 [2024-12-06 16:30:24.817602] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:43.005 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.005 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.005 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.005 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:43.005 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.005 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.263 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:43.263 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:43.263 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:43.263 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:43.263 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.263 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:43.263 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:43.263 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:43.263 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:43.263 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:43.263 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:43.263 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:43.263 16:30:24 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:43.521 /dev/nbd0 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.521 1+0 records in 00:14:43.521 1+0 records out 00:14:43.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252186 s, 16.2 MB/s 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:43.521 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:43.779 /dev/nbd1 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.779 1+0 records in 00:14:43.779 1+0 records out 00:14:43.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311082 s, 13.2 MB/s 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:43.779 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.780 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:43.780 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:43.780 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:43.780 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.780 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 
00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.112 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87554 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 87554 ']' 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 87554 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 87554 00:14:44.372 killing process with pid 87554 00:14:44.372 Received shutdown signal, test time was about 10.310106 seconds 00:14:44.372 00:14:44.372 Latency(us) 00:14:44.372 [2024-12-06T16:30:26.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.372 [2024-12-06T16:30:26.211Z] =================================================================================================================== 00:14:44.372 [2024-12-06T16:30:26.211Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87554' 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 87554 00:14:44.372 [2024-12-06 16:30:26.122637] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:44.372 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 87554 00:14:44.372 [2024-12-06 16:30:26.149539] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:44.633 00:14:44.633 real 0m12.357s 00:14:44.633 user 0m15.975s 00:14:44.633 sys 0m1.408s 00:14:44.633 ************************************ 00:14:44.633 END TEST raid_rebuild_test_io 00:14:44.633 ************************************ 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.633 16:30:26 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:44.633 
16:30:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:44.633 16:30:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.633 16:30:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.633 ************************************ 00:14:44.633 START TEST raid_rebuild_test_sb_io 00:14:44.633 ************************************ 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.633 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2') 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87945 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87945 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 87945 ']' 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:14:44.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.634 16:30:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.893 [2024-12-06 16:30:26.525471] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:14:44.893 [2024-12-06 16:30:26.525715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87945 ] 00:14:44.893 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:44.893 Zero copy mechanism will not be used. 00:14:44.893 [2024-12-06 16:30:26.703062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.152 [2024-12-06 16:30:26.733001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.152 [2024-12-06 16:30:26.779022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.152 [2024-12-06 16:30:26.779061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:45.722 BaseBdev1_malloc 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.722 [2024-12-06 16:30:27.461158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:45.722 [2024-12-06 16:30:27.461314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.722 [2024-12-06 16:30:27.461378] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:45.722 [2024-12-06 16:30:27.461420] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.722 [2024-12-06 16:30:27.464000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.722 [2024-12-06 16:30:27.464087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:45.722 BaseBdev1 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.722 BaseBdev2_malloc 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.722 [2024-12-06 16:30:27.490596] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:45.722 [2024-12-06 16:30:27.490733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.722 [2024-12-06 16:30:27.490793] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:45.722 [2024-12-06 16:30:27.490836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.722 [2024-12-06 16:30:27.493595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.722 [2024-12-06 16:30:27.493679] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:45.722 BaseBdev2 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.722 spare_malloc 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.722 spare_delay 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.722 [2024-12-06 16:30:27.532026] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:45.722 [2024-12-06 16:30:27.532092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.722 [2024-12-06 16:30:27.532119] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:45.722 [2024-12-06 16:30:27.532129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.722 [2024-12-06 16:30:27.534617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.722 [2024-12-06 16:30:27.534660] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:45.722 spare 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.722 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.723 [2024-12-06 16:30:27.544057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.723 [2024-12-06 16:30:27.546242] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.723 [2024-12-06 16:30:27.546420] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:45.723 [2024-12-06 16:30:27.546436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:45.723 [2024-12-06 16:30:27.546747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:14:45.723 [2024-12-06 16:30:27.546903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:45.723 [2024-12-06 16:30:27.546918] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:45.723 [2024-12-06 16:30:27.547059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.723 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.982 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.982 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.982 "name": "raid_bdev1", 00:14:45.982 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:45.982 "strip_size_kb": 0, 00:14:45.982 "state": "online", 00:14:45.982 "raid_level": "raid1", 00:14:45.982 "superblock": true, 00:14:45.982 "num_base_bdevs": 2, 00:14:45.982 "num_base_bdevs_discovered": 2, 00:14:45.982 "num_base_bdevs_operational": 2, 00:14:45.982 "base_bdevs_list": [ 00:14:45.982 { 00:14:45.982 "name": "BaseBdev1", 00:14:45.982 "uuid": "cb8e806a-f538-52ac-8bf2-48fed378e9af", 00:14:45.982 "is_configured": true, 00:14:45.982 "data_offset": 2048, 00:14:45.982 "data_size": 63488 00:14:45.982 }, 00:14:45.982 { 00:14:45.982 "name": "BaseBdev2", 00:14:45.982 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:45.982 "is_configured": true, 00:14:45.982 "data_offset": 2048, 00:14:45.982 "data_size": 63488 00:14:45.982 } 00:14:45.982 ] 00:14:45.982 }' 00:14:45.982 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.982 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.241 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:14:46.241 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:46.241 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.241 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.241 [2024-12-06 16:30:27.975723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:46.241 16:30:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.241 [2024-12-06 16:30:28.071390] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.241 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.501 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.501 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.501 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.501 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.501 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.501 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.501 "name": 
"raid_bdev1", 00:14:46.501 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:46.501 "strip_size_kb": 0, 00:14:46.501 "state": "online", 00:14:46.501 "raid_level": "raid1", 00:14:46.501 "superblock": true, 00:14:46.501 "num_base_bdevs": 2, 00:14:46.501 "num_base_bdevs_discovered": 1, 00:14:46.501 "num_base_bdevs_operational": 1, 00:14:46.501 "base_bdevs_list": [ 00:14:46.501 { 00:14:46.501 "name": null, 00:14:46.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.501 "is_configured": false, 00:14:46.501 "data_offset": 0, 00:14:46.501 "data_size": 63488 00:14:46.501 }, 00:14:46.501 { 00:14:46.501 "name": "BaseBdev2", 00:14:46.501 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:46.501 "is_configured": true, 00:14:46.501 "data_offset": 2048, 00:14:46.501 "data_size": 63488 00:14:46.501 } 00:14:46.501 ] 00:14:46.501 }' 00:14:46.501 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.501 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.501 [2024-12-06 16:30:28.193641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:46.501 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:46.501 Zero copy mechanism will not be used. 00:14:46.501 Running I/O for 60 seconds... 
00:14:46.761 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:46.761 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.761 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.761 [2024-12-06 16:30:28.507183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.761 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.761 16:30:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:46.761 [2024-12-06 16:30:28.545457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:46.761 [2024-12-06 16:30:28.547752] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.020 [2024-12-06 16:30:28.661789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:47.020 [2024-12-06 16:30:28.662377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:47.279 [2024-12-06 16:30:28.883256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:47.279 [2024-12-06 16:30:28.883666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:47.539 84.00 IOPS, 252.00 MiB/s [2024-12-06T16:30:29.378Z] [2024-12-06 16:30:29.221263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:47.539 [2024-12-06 16:30:29.221747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:47.798 [2024-12-06 16:30:29.444850] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:47.798 [2024-12-06 16:30:29.445133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:47.798 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.798 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.798 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.798 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.798 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.798 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.798 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.798 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.798 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.798 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.798 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.798 "name": "raid_bdev1", 00:14:47.798 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:47.798 "strip_size_kb": 0, 00:14:47.798 "state": "online", 00:14:47.798 "raid_level": "raid1", 00:14:47.798 "superblock": true, 00:14:47.798 "num_base_bdevs": 2, 00:14:47.798 "num_base_bdevs_discovered": 2, 00:14:47.798 "num_base_bdevs_operational": 2, 00:14:47.798 "process": { 00:14:47.798 "type": "rebuild", 00:14:47.798 "target": "spare", 00:14:47.798 "progress": { 
00:14:47.798 "blocks": 10240, 00:14:47.798 "percent": 16 00:14:47.798 } 00:14:47.798 }, 00:14:47.798 "base_bdevs_list": [ 00:14:47.798 { 00:14:47.798 "name": "spare", 00:14:47.798 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:14:47.798 "is_configured": true, 00:14:47.798 "data_offset": 2048, 00:14:47.798 "data_size": 63488 00:14:47.798 }, 00:14:47.798 { 00:14:47.798 "name": "BaseBdev2", 00:14:47.798 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:47.798 "is_configured": true, 00:14:47.798 "data_offset": 2048, 00:14:47.798 "data_size": 63488 00:14:47.798 } 00:14:47.798 ] 00:14:47.798 }' 00:14:47.798 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.057 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.057 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.058 [2024-12-06 16:30:29.681949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.058 [2024-12-06 16:30:29.709840] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:48.058 [2024-12-06 16:30:29.724428] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:48.058 [2024-12-06 16:30:29.742392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.058 [2024-12-06 16:30:29.742519] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.058 [2024-12-06 16:30:29.742570] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:48.058 [2024-12-06 16:30:29.757591] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.058 16:30:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.058 "name": "raid_bdev1", 00:14:48.058 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:48.058 "strip_size_kb": 0, 00:14:48.058 "state": "online", 00:14:48.058 "raid_level": "raid1", 00:14:48.058 "superblock": true, 00:14:48.058 "num_base_bdevs": 2, 00:14:48.058 "num_base_bdevs_discovered": 1, 00:14:48.058 "num_base_bdevs_operational": 1, 00:14:48.058 "base_bdevs_list": [ 00:14:48.058 { 00:14:48.058 "name": null, 00:14:48.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.058 "is_configured": false, 00:14:48.058 "data_offset": 0, 00:14:48.058 "data_size": 63488 00:14:48.058 }, 00:14:48.058 { 00:14:48.058 "name": "BaseBdev2", 00:14:48.058 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:48.058 "is_configured": true, 00:14:48.058 "data_offset": 2048, 00:14:48.058 "data_size": 63488 00:14:48.058 } 00:14:48.058 ] 00:14:48.058 }' 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.058 16:30:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.627 119.50 IOPS, 358.50 MiB/s [2024-12-06T16:30:30.466Z] 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.627 
16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.627 "name": "raid_bdev1", 00:14:48.627 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:48.627 "strip_size_kb": 0, 00:14:48.627 "state": "online", 00:14:48.627 "raid_level": "raid1", 00:14:48.627 "superblock": true, 00:14:48.627 "num_base_bdevs": 2, 00:14:48.627 "num_base_bdevs_discovered": 1, 00:14:48.627 "num_base_bdevs_operational": 1, 00:14:48.627 "base_bdevs_list": [ 00:14:48.627 { 00:14:48.627 "name": null, 00:14:48.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.627 "is_configured": false, 00:14:48.627 "data_offset": 0, 00:14:48.627 "data_size": 63488 00:14:48.627 }, 00:14:48.627 { 00:14:48.627 "name": "BaseBdev2", 00:14:48.627 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:48.627 "is_configured": true, 00:14:48.627 "data_offset": 2048, 00:14:48.627 "data_size": 63488 00:14:48.627 } 00:14:48.627 ] 00:14:48.627 }' 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.627 16:30:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.627 [2024-12-06 16:30:30.373507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.627 16:30:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:48.627 [2024-12-06 16:30:30.405131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:48.627 [2024-12-06 16:30:30.407373] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.887 [2024-12-06 16:30:30.529639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:48.887 [2024-12-06 16:30:30.530134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:49.145 [2024-12-06 16:30:30.755431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:49.145 [2024-12-06 16:30:30.755767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:49.404 [2024-12-06 16:30:31.136557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:49.404 [2024-12-06 16:30:31.137094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:49.664 126.33 IOPS, 379.00 MiB/s [2024-12-06T16:30:31.503Z] [2024-12-06 16:30:31.354267] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:49.664 [2024-12-06 16:30:31.354686] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.664 "name": "raid_bdev1", 00:14:49.664 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:49.664 "strip_size_kb": 0, 00:14:49.664 "state": "online", 00:14:49.664 "raid_level": "raid1", 00:14:49.664 "superblock": true, 00:14:49.664 "num_base_bdevs": 2, 00:14:49.664 "num_base_bdevs_discovered": 2, 00:14:49.664 "num_base_bdevs_operational": 2, 00:14:49.664 "process": { 00:14:49.664 "type": "rebuild", 00:14:49.664 "target": "spare", 00:14:49.664 "progress": { 
00:14:49.664 "blocks": 10240, 00:14:49.664 "percent": 16 00:14:49.664 } 00:14:49.664 }, 00:14:49.664 "base_bdevs_list": [ 00:14:49.664 { 00:14:49.664 "name": "spare", 00:14:49.664 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:14:49.664 "is_configured": true, 00:14:49.664 "data_offset": 2048, 00:14:49.664 "data_size": 63488 00:14:49.664 }, 00:14:49.664 { 00:14:49.664 "name": "BaseBdev2", 00:14:49.664 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:49.664 "is_configured": true, 00:14:49.664 "data_offset": 2048, 00:14:49.664 "data_size": 63488 00:14:49.664 } 00:14:49.664 ] 00:14:49.664 }' 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.664 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:49.923 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=341 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.923 "name": "raid_bdev1", 00:14:49.923 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:49.923 "strip_size_kb": 0, 00:14:49.923 "state": "online", 00:14:49.923 "raid_level": "raid1", 00:14:49.923 "superblock": true, 00:14:49.923 "num_base_bdevs": 2, 00:14:49.923 "num_base_bdevs_discovered": 2, 00:14:49.923 "num_base_bdevs_operational": 2, 00:14:49.923 "process": { 00:14:49.923 "type": "rebuild", 00:14:49.923 "target": "spare", 00:14:49.923 "progress": { 00:14:49.923 "blocks": 10240, 00:14:49.923 "percent": 16 00:14:49.923 } 00:14:49.923 }, 00:14:49.923 "base_bdevs_list": [ 00:14:49.923 { 00:14:49.923 "name": "spare", 00:14:49.923 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:14:49.923 "is_configured": true, 00:14:49.923 "data_offset": 2048, 00:14:49.923 "data_size": 63488 
00:14:49.923 }, 00:14:49.923 { 00:14:49.923 "name": "BaseBdev2", 00:14:49.923 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:49.923 "is_configured": true, 00:14:49.923 "data_offset": 2048, 00:14:49.923 "data_size": 63488 00:14:49.923 } 00:14:49.923 ] 00:14:49.923 }' 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.923 [2024-12-06 16:30:31.685015] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.923 16:30:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.197 [2024-12-06 16:30:31.815945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:50.197 [2024-12-06 16:30:31.816353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:51.031 105.50 IOPS, 316.50 MiB/s [2024-12-06T16:30:32.870Z] [2024-12-06 16:30:32.661042] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:51.031 [2024-12-06 16:30:32.661284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.031 16:30:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.031 "name": "raid_bdev1", 00:14:51.031 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:51.031 "strip_size_kb": 0, 00:14:51.031 "state": "online", 00:14:51.031 "raid_level": "raid1", 00:14:51.031 "superblock": true, 00:14:51.031 "num_base_bdevs": 2, 00:14:51.031 "num_base_bdevs_discovered": 2, 00:14:51.031 "num_base_bdevs_operational": 2, 00:14:51.031 "process": { 00:14:51.031 "type": "rebuild", 00:14:51.031 "target": "spare", 00:14:51.031 "progress": { 00:14:51.031 "blocks": 28672, 00:14:51.031 "percent": 45 00:14:51.031 } 00:14:51.031 }, 00:14:51.031 "base_bdevs_list": [ 00:14:51.031 { 00:14:51.031 "name": "spare", 00:14:51.031 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:14:51.031 "is_configured": true, 00:14:51.031 "data_offset": 2048, 00:14:51.031 "data_size": 63488 00:14:51.031 }, 00:14:51.031 { 00:14:51.031 "name": "BaseBdev2", 00:14:51.031 "uuid": 
"88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:51.031 "is_configured": true, 00:14:51.031 "data_offset": 2048, 00:14:51.031 "data_size": 63488 00:14:51.031 } 00:14:51.031 ] 00:14:51.031 }' 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.031 16:30:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:51.600 96.60 IOPS, 289.80 MiB/s [2024-12-06T16:30:33.439Z] [2024-12-06 16:30:33.305291] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:51.860 [2024-12-06 16:30:33.513133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:51.860 [2024-12-06 16:30:33.513517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.119 
16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.119 "name": "raid_bdev1", 00:14:52.119 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:52.119 "strip_size_kb": 0, 00:14:52.119 "state": "online", 00:14:52.119 "raid_level": "raid1", 00:14:52.119 "superblock": true, 00:14:52.119 "num_base_bdevs": 2, 00:14:52.119 "num_base_bdevs_discovered": 2, 00:14:52.119 "num_base_bdevs_operational": 2, 00:14:52.119 "process": { 00:14:52.119 "type": "rebuild", 00:14:52.119 "target": "spare", 00:14:52.119 "progress": { 00:14:52.119 "blocks": 45056, 00:14:52.119 "percent": 70 00:14:52.119 } 00:14:52.119 }, 00:14:52.119 "base_bdevs_list": [ 00:14:52.119 { 00:14:52.119 "name": "spare", 00:14:52.119 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:14:52.119 "is_configured": true, 00:14:52.119 "data_offset": 2048, 00:14:52.119 "data_size": 63488 00:14:52.119 }, 00:14:52.119 { 00:14:52.119 "name": "BaseBdev2", 00:14:52.119 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:52.119 "is_configured": true, 00:14:52.119 "data_offset": 2048, 00:14:52.119 "data_size": 63488 00:14:52.119 } 00:14:52.119 ] 00:14:52.119 }' 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.119 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.119 16:30:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.379 [2024-12-06 16:30:33.973608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:52.379 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.379 16:30:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:52.639 88.50 IOPS, 265.50 MiB/s [2024-12-06T16:30:34.478Z] [2024-12-06 16:30:34.300902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:52.639 [2024-12-06 16:30:34.301343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:52.639 [2024-12-06 16:30:34.418098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:53.206 [2024-12-06 16:30:34.851356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:53.206 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.206 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.206 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.206 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.206 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.206 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.206 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.206 
16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.206 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.206 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.206 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.465 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.465 "name": "raid_bdev1", 00:14:53.465 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:53.465 "strip_size_kb": 0, 00:14:53.465 "state": "online", 00:14:53.465 "raid_level": "raid1", 00:14:53.465 "superblock": true, 00:14:53.465 "num_base_bdevs": 2, 00:14:53.465 "num_base_bdevs_discovered": 2, 00:14:53.465 "num_base_bdevs_operational": 2, 00:14:53.465 "process": { 00:14:53.465 "type": "rebuild", 00:14:53.465 "target": "spare", 00:14:53.465 "progress": { 00:14:53.465 "blocks": 61440, 00:14:53.465 "percent": 96 00:14:53.465 } 00:14:53.465 }, 00:14:53.465 "base_bdevs_list": [ 00:14:53.465 { 00:14:53.465 "name": "spare", 00:14:53.465 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:14:53.465 "is_configured": true, 00:14:53.465 "data_offset": 2048, 00:14:53.465 "data_size": 63488 00:14:53.465 }, 00:14:53.465 { 00:14:53.465 "name": "BaseBdev2", 00:14:53.465 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:53.465 "is_configured": true, 00:14:53.465 "data_offset": 2048, 00:14:53.465 "data_size": 63488 00:14:53.465 } 00:14:53.465 ] 00:14:53.465 }' 00:14:53.465 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.465 [2024-12-06 16:30:35.077040] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:53.465 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.465 
16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.465 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.465 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.465 [2024-12-06 16:30:35.176855] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:53.465 [2024-12-06 16:30:35.178812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.402 81.00 IOPS, 243.00 MiB/s [2024-12-06T16:30:36.241Z] 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.402 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.402 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.402 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.402 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.402 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.402 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.402 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.402 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.402 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.402 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.402 74.75 IOPS, 224.25 MiB/s [2024-12-06T16:30:36.241Z] 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.402 "name": "raid_bdev1", 00:14:54.402 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:54.402 "strip_size_kb": 0, 00:14:54.402 "state": "online", 00:14:54.402 "raid_level": "raid1", 00:14:54.402 "superblock": true, 00:14:54.402 "num_base_bdevs": 2, 00:14:54.402 "num_base_bdevs_discovered": 2, 00:14:54.402 "num_base_bdevs_operational": 2, 00:14:54.402 "base_bdevs_list": [ 00:14:54.402 { 00:14:54.402 "name": "spare", 00:14:54.402 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:14:54.402 "is_configured": true, 00:14:54.402 "data_offset": 2048, 00:14:54.402 "data_size": 63488 00:14:54.402 }, 00:14:54.402 { 00:14:54.402 "name": "BaseBdev2", 00:14:54.403 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:54.403 "is_configured": true, 00:14:54.403 "data_offset": 2048, 00:14:54.403 "data_size": 63488 00:14:54.403 } 00:14:54.403 ] 00:14:54.403 }' 00:14:54.403 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.661 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:54.661 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.661 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:54.661 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.662 "name": "raid_bdev1", 00:14:54.662 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:54.662 "strip_size_kb": 0, 00:14:54.662 "state": "online", 00:14:54.662 "raid_level": "raid1", 00:14:54.662 "superblock": true, 00:14:54.662 "num_base_bdevs": 2, 00:14:54.662 "num_base_bdevs_discovered": 2, 00:14:54.662 "num_base_bdevs_operational": 2, 00:14:54.662 "base_bdevs_list": [ 00:14:54.662 { 00:14:54.662 "name": "spare", 00:14:54.662 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:14:54.662 "is_configured": true, 00:14:54.662 "data_offset": 2048, 00:14:54.662 "data_size": 63488 00:14:54.662 }, 00:14:54.662 { 00:14:54.662 "name": "BaseBdev2", 00:14:54.662 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:54.662 "is_configured": true, 00:14:54.662 "data_offset": 2048, 00:14:54.662 "data_size": 63488 00:14:54.662 } 00:14:54.662 ] 00:14:54.662 }' 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.662 "name": "raid_bdev1", 00:14:54.662 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:54.662 "strip_size_kb": 0, 00:14:54.662 
"state": "online", 00:14:54.662 "raid_level": "raid1", 00:14:54.662 "superblock": true, 00:14:54.662 "num_base_bdevs": 2, 00:14:54.662 "num_base_bdevs_discovered": 2, 00:14:54.662 "num_base_bdevs_operational": 2, 00:14:54.662 "base_bdevs_list": [ 00:14:54.662 { 00:14:54.662 "name": "spare", 00:14:54.662 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:14:54.662 "is_configured": true, 00:14:54.662 "data_offset": 2048, 00:14:54.662 "data_size": 63488 00:14:54.662 }, 00:14:54.662 { 00:14:54.662 "name": "BaseBdev2", 00:14:54.662 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:54.662 "is_configured": true, 00:14:54.662 "data_offset": 2048, 00:14:54.662 "data_size": 63488 00:14:54.662 } 00:14:54.662 ] 00:14:54.662 }' 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.662 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.230 [2024-12-06 16:30:36.863214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.230 [2024-12-06 16:30:36.863249] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.230 00:14:55.230 Latency(us) 00:14:55.230 [2024-12-06T16:30:37.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.230 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:55.230 raid_bdev1 : 8.74 71.55 214.65 0.00 0.00 19235.17 307.65 116762.83 00:14:55.230 [2024-12-06T16:30:37.069Z] 
=================================================================================================================== 00:14:55.230 [2024-12-06T16:30:37.069Z] Total : 71.55 214.65 0.00 0.00 19235.17 307.65 116762.83 00:14:55.230 { 00:14:55.230 "results": [ 00:14:55.230 { 00:14:55.230 "job": "raid_bdev1", 00:14:55.230 "core_mask": "0x1", 00:14:55.230 "workload": "randrw", 00:14:55.230 "percentage": 50, 00:14:55.230 "status": "finished", 00:14:55.230 "queue_depth": 2, 00:14:55.230 "io_size": 3145728, 00:14:55.230 "runtime": 8.735225, 00:14:55.230 "iops": 71.54938768034023, 00:14:55.230 "mibps": 214.6481630410207, 00:14:55.230 "io_failed": 0, 00:14:55.230 "io_timeout": 0, 00:14:55.230 "avg_latency_us": 19235.17086742358, 00:14:55.230 "min_latency_us": 307.6471615720524, 00:14:55.230 "max_latency_us": 116762.82969432314 00:14:55.230 } 00:14:55.230 ], 00:14:55.230 "core_count": 1 00:14:55.230 } 00:14:55.230 [2024-12-06 16:30:36.918779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.230 [2024-12-06 16:30:36.918834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.230 [2024-12-06 16:30:36.918914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.230 [2024-12-06 16:30:36.918928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.230 16:30:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.230 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:55.488 /dev/nbd0 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.488 1+0 records in 00:14:55.488 1+0 records out 00:14:55.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384263 s, 10.7 MB/s 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' 
-z BaseBdev2 ']' 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.488 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:55.747 /dev/nbd1 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:55.747 16:30:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.747 1+0 records in 00:14:55.747 1+0 records out 00:14:55.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421473 s, 9.7 MB/s 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.747 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.004 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.261 [2024-12-06 16:30:38.078406] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:56.261 [2024-12-06 16:30:38.078470] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.261 [2024-12-06 16:30:38.078491] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:56.261 [2024-12-06 16:30:38.078507] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.261 [2024-12-06 16:30:38.081002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.261 spare 00:14:56.261 [2024-12-06 16:30:38.081104] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:56.261 [2024-12-06 16:30:38.081230] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:56.261 [2024-12-06 16:30:38.081276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:56.261 [2024-12-06 16:30:38.081426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.261 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.519 [2024-12-06 16:30:38.181353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:56.519 [2024-12-06 16:30:38.181399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:56.519 [2024-12-06 16:30:38.181723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:14:56.519 [2024-12-06 16:30:38.181888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:56.519 [2024-12-06 16:30:38.181901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000006600 00:14:56.519 [2024-12-06 16:30:38.182066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.519 16:30:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.519 "name": "raid_bdev1", 00:14:56.519 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:56.519 "strip_size_kb": 0, 00:14:56.519 "state": "online", 00:14:56.519 "raid_level": "raid1", 00:14:56.519 "superblock": true, 00:14:56.519 "num_base_bdevs": 2, 00:14:56.519 "num_base_bdevs_discovered": 2, 00:14:56.519 "num_base_bdevs_operational": 2, 00:14:56.519 "base_bdevs_list": [ 00:14:56.519 { 00:14:56.519 "name": "spare", 00:14:56.519 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:14:56.519 "is_configured": true, 00:14:56.519 "data_offset": 2048, 00:14:56.519 "data_size": 63488 00:14:56.519 }, 00:14:56.519 { 00:14:56.519 "name": "BaseBdev2", 00:14:56.519 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:56.519 "is_configured": true, 00:14:56.519 "data_offset": 2048, 00:14:56.519 "data_size": 63488 00:14:56.519 } 00:14:56.519 ] 00:14:56.519 }' 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.519 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.852 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.852 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.852 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.852 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.852 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.852 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.852 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.852 16:30:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.852 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.852 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.110 "name": "raid_bdev1", 00:14:57.110 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:57.110 "strip_size_kb": 0, 00:14:57.110 "state": "online", 00:14:57.110 "raid_level": "raid1", 00:14:57.110 "superblock": true, 00:14:57.110 "num_base_bdevs": 2, 00:14:57.110 "num_base_bdevs_discovered": 2, 00:14:57.110 "num_base_bdevs_operational": 2, 00:14:57.110 "base_bdevs_list": [ 00:14:57.110 { 00:14:57.110 "name": "spare", 00:14:57.110 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:14:57.110 "is_configured": true, 00:14:57.110 "data_offset": 2048, 00:14:57.110 "data_size": 63488 00:14:57.110 }, 00:14:57.110 { 00:14:57.110 "name": "BaseBdev2", 00:14:57.110 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:57.110 "is_configured": true, 00:14:57.110 "data_offset": 2048, 00:14:57.110 "data_size": 63488 00:14:57.110 } 00:14:57.110 ] 00:14:57.110 }' 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:57.110 16:30:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.110 [2024-12-06 16:30:38.857315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:57.110 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.111 "name": "raid_bdev1", 00:14:57.111 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:57.111 "strip_size_kb": 0, 00:14:57.111 "state": "online", 00:14:57.111 "raid_level": "raid1", 00:14:57.111 "superblock": true, 00:14:57.111 "num_base_bdevs": 2, 00:14:57.111 "num_base_bdevs_discovered": 1, 00:14:57.111 "num_base_bdevs_operational": 1, 00:14:57.111 "base_bdevs_list": [ 00:14:57.111 { 00:14:57.111 "name": null, 00:14:57.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.111 "is_configured": false, 00:14:57.111 "data_offset": 0, 00:14:57.111 "data_size": 63488 00:14:57.111 }, 00:14:57.111 { 00:14:57.111 "name": "BaseBdev2", 00:14:57.111 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:57.111 "is_configured": true, 00:14:57.111 "data_offset": 2048, 00:14:57.111 "data_size": 63488 00:14:57.111 } 00:14:57.111 ] 00:14:57.111 }' 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.111 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.678 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:57.678 16:30:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.678 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.678 [2024-12-06 16:30:39.328602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.678 [2024-12-06 16:30:39.328900] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:57.678 [2024-12-06 16:30:39.328969] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:57.678 [2024-12-06 16:30:39.329051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.678 [2024-12-06 16:30:39.334463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:14:57.678 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.678 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:57.678 [2024-12-06 16:30:39.336633] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.611 "name": "raid_bdev1", 00:14:58.611 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:58.611 "strip_size_kb": 0, 00:14:58.611 "state": "online", 00:14:58.611 "raid_level": "raid1", 00:14:58.611 "superblock": true, 00:14:58.611 "num_base_bdevs": 2, 00:14:58.611 "num_base_bdevs_discovered": 2, 00:14:58.611 "num_base_bdevs_operational": 2, 00:14:58.611 "process": { 00:14:58.611 "type": "rebuild", 00:14:58.611 "target": "spare", 00:14:58.611 "progress": { 00:14:58.611 "blocks": 20480, 00:14:58.611 "percent": 32 00:14:58.611 } 00:14:58.611 }, 00:14:58.611 "base_bdevs_list": [ 00:14:58.611 { 00:14:58.611 "name": "spare", 00:14:58.611 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:14:58.611 "is_configured": true, 00:14:58.611 "data_offset": 2048, 00:14:58.611 "data_size": 63488 00:14:58.611 }, 00:14:58.611 { 00:14:58.611 "name": "BaseBdev2", 00:14:58.611 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:58.611 "is_configured": true, 00:14:58.611 "data_offset": 2048, 00:14:58.611 "data_size": 63488 00:14:58.611 } 00:14:58.611 ] 00:14:58.611 }' 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.611 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.870 [2024-12-06 16:30:40.480908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.870 [2024-12-06 16:30:40.541772] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:58.870 [2024-12-06 16:30:40.541944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.870 [2024-12-06 16:30:40.541971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.870 [2024-12-06 16:30:40.541981] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.870 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.870 "name": "raid_bdev1", 00:14:58.870 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:14:58.870 "strip_size_kb": 0, 00:14:58.870 "state": "online", 00:14:58.870 "raid_level": "raid1", 00:14:58.870 "superblock": true, 00:14:58.870 "num_base_bdevs": 2, 00:14:58.870 "num_base_bdevs_discovered": 1, 00:14:58.870 "num_base_bdevs_operational": 1, 00:14:58.870 "base_bdevs_list": [ 00:14:58.870 { 00:14:58.870 "name": null, 00:14:58.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.870 "is_configured": false, 00:14:58.870 "data_offset": 0, 00:14:58.870 "data_size": 63488 00:14:58.870 }, 00:14:58.870 { 00:14:58.870 "name": "BaseBdev2", 00:14:58.870 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:14:58.871 "is_configured": true, 00:14:58.871 "data_offset": 2048, 00:14:58.871 "data_size": 63488 00:14:58.871 } 00:14:58.871 ] 00:14:58.871 }' 00:14:58.871 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.871 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.438 16:30:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:59.438 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.438 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.438 [2024-12-06 16:30:41.018481] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:59.438 [2024-12-06 16:30:41.018605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.438 [2024-12-06 16:30:41.018668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:59.438 [2024-12-06 16:30:41.018709] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.438 [2024-12-06 16:30:41.019245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.439 [2024-12-06 16:30:41.019312] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:59.439 [2024-12-06 16:30:41.019452] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:59.439 [2024-12-06 16:30:41.019499] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:59.439 [2024-12-06 16:30:41.019557] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:59.439 [2024-12-06 16:30:41.019607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.439 [2024-12-06 16:30:41.025072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:59.439 spare 00:14:59.439 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.439 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:59.439 [2024-12-06 16:30:41.027315] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.375 "name": "raid_bdev1", 00:15:00.375 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:15:00.375 "strip_size_kb": 0, 00:15:00.375 
"state": "online", 00:15:00.375 "raid_level": "raid1", 00:15:00.375 "superblock": true, 00:15:00.375 "num_base_bdevs": 2, 00:15:00.375 "num_base_bdevs_discovered": 2, 00:15:00.375 "num_base_bdevs_operational": 2, 00:15:00.375 "process": { 00:15:00.375 "type": "rebuild", 00:15:00.375 "target": "spare", 00:15:00.375 "progress": { 00:15:00.375 "blocks": 20480, 00:15:00.375 "percent": 32 00:15:00.375 } 00:15:00.375 }, 00:15:00.375 "base_bdevs_list": [ 00:15:00.375 { 00:15:00.375 "name": "spare", 00:15:00.375 "uuid": "eb687980-c6f8-5ca3-86bb-2c6f439362f7", 00:15:00.375 "is_configured": true, 00:15:00.375 "data_offset": 2048, 00:15:00.375 "data_size": 63488 00:15:00.375 }, 00:15:00.375 { 00:15:00.375 "name": "BaseBdev2", 00:15:00.375 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:15:00.375 "is_configured": true, 00:15:00.375 "data_offset": 2048, 00:15:00.375 "data_size": 63488 00:15:00.375 } 00:15:00.375 ] 00:15:00.375 }' 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.375 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.375 [2024-12-06 16:30:42.172014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.635 [2024-12-06 16:30:42.232401] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:00.635 [2024-12-06 16:30:42.232488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.635 [2024-12-06 16:30:42.232506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.635 [2024-12-06 16:30:42.232517] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.635 16:30:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.635 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.635 "name": "raid_bdev1", 00:15:00.635 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:15:00.635 "strip_size_kb": 0, 00:15:00.635 "state": "online", 00:15:00.635 "raid_level": "raid1", 00:15:00.636 "superblock": true, 00:15:00.636 "num_base_bdevs": 2, 00:15:00.636 "num_base_bdevs_discovered": 1, 00:15:00.636 "num_base_bdevs_operational": 1, 00:15:00.636 "base_bdevs_list": [ 00:15:00.636 { 00:15:00.636 "name": null, 00:15:00.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.636 "is_configured": false, 00:15:00.636 "data_offset": 0, 00:15:00.636 "data_size": 63488 00:15:00.636 }, 00:15:00.636 { 00:15:00.636 "name": "BaseBdev2", 00:15:00.636 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:15:00.636 "is_configured": true, 00:15:00.636 "data_offset": 2048, 00:15:00.636 "data_size": 63488 00:15:00.636 } 00:15:00.636 ] 00:15:00.636 }' 00:15:00.636 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.636 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.895 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.895 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.895 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.895 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.895 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.895 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.895 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.895 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.895 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.895 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.154 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.154 "name": "raid_bdev1", 00:15:01.154 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:15:01.154 "strip_size_kb": 0, 00:15:01.154 "state": "online", 00:15:01.154 "raid_level": "raid1", 00:15:01.154 "superblock": true, 00:15:01.154 "num_base_bdevs": 2, 00:15:01.154 "num_base_bdevs_discovered": 1, 00:15:01.154 "num_base_bdevs_operational": 1, 00:15:01.154 "base_bdevs_list": [ 00:15:01.154 { 00:15:01.154 "name": null, 00:15:01.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.154 "is_configured": false, 00:15:01.154 "data_offset": 0, 00:15:01.154 "data_size": 63488 00:15:01.154 }, 00:15:01.154 { 00:15:01.154 "name": "BaseBdev2", 00:15:01.154 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:15:01.154 "is_configured": true, 00:15:01.154 "data_offset": 2048, 00:15:01.154 "data_size": 63488 00:15:01.154 } 00:15:01.154 ] 00:15:01.154 }' 00:15:01.154 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.154 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:01.154 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.154 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:01.154 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:01.155 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.155 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.155 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.155 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:01.155 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.155 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.155 [2024-12-06 16:30:42.848734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:01.155 [2024-12-06 16:30:42.848803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.155 [2024-12-06 16:30:42.848836] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:01.155 [2024-12-06 16:30:42.848848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.155 [2024-12-06 16:30:42.849309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.155 [2024-12-06 16:30:42.849333] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:01.155 [2024-12-06 16:30:42.849414] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:01.155 [2024-12-06 16:30:42.849437] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:01.155 [2024-12-06 16:30:42.849448] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:01.155 [2024-12-06 16:30:42.849461] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:01.155 BaseBdev1 00:15:01.155 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.155 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.092 "name": "raid_bdev1", 00:15:02.092 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:15:02.092 "strip_size_kb": 0, 00:15:02.092 "state": "online", 00:15:02.092 "raid_level": "raid1", 00:15:02.092 "superblock": true, 00:15:02.092 "num_base_bdevs": 2, 00:15:02.092 "num_base_bdevs_discovered": 1, 00:15:02.092 "num_base_bdevs_operational": 1, 00:15:02.092 "base_bdevs_list": [ 00:15:02.092 { 00:15:02.092 "name": null, 00:15:02.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.092 "is_configured": false, 00:15:02.092 "data_offset": 0, 00:15:02.092 "data_size": 63488 00:15:02.092 }, 00:15:02.092 { 00:15:02.092 "name": "BaseBdev2", 00:15:02.092 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:15:02.092 "is_configured": true, 00:15:02.092 "data_offset": 2048, 00:15:02.092 "data_size": 63488 00:15:02.092 } 00:15:02.092 ] 00:15:02.092 }' 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.092 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.736 "name": "raid_bdev1", 00:15:02.736 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:15:02.736 "strip_size_kb": 0, 00:15:02.736 "state": "online", 00:15:02.736 "raid_level": "raid1", 00:15:02.736 "superblock": true, 00:15:02.736 "num_base_bdevs": 2, 00:15:02.736 "num_base_bdevs_discovered": 1, 00:15:02.736 "num_base_bdevs_operational": 1, 00:15:02.736 "base_bdevs_list": [ 00:15:02.736 { 00:15:02.736 "name": null, 00:15:02.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.736 "is_configured": false, 00:15:02.736 "data_offset": 0, 00:15:02.736 "data_size": 63488 00:15:02.736 }, 00:15:02.736 { 00:15:02.736 "name": "BaseBdev2", 00:15:02.736 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:15:02.736 "is_configured": true, 00:15:02.736 "data_offset": 2048, 00:15:02.736 "data_size": 63488 00:15:02.736 } 00:15:02.736 ] 00:15:02.736 }' 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.736 [2024-12-06 16:30:44.430519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.736 [2024-12-06 16:30:44.430753] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:02.736 [2024-12-06 16:30:44.430773] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:02.736 request: 00:15:02.736 { 00:15:02.736 "base_bdev": "BaseBdev1", 00:15:02.736 "raid_bdev": "raid_bdev1", 00:15:02.736 "method": "bdev_raid_add_base_bdev", 00:15:02.736 "req_id": 1 00:15:02.736 } 00:15:02.736 Got JSON-RPC error response 00:15:02.736 response: 00:15:02.736 { 00:15:02.736 "code": -22, 00:15:02.736 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:02.736 } 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:02.736 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.694 "name": "raid_bdev1", 00:15:03.694 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:15:03.694 "strip_size_kb": 0, 00:15:03.694 "state": "online", 00:15:03.694 "raid_level": "raid1", 00:15:03.694 "superblock": true, 00:15:03.694 "num_base_bdevs": 2, 00:15:03.694 "num_base_bdevs_discovered": 1, 00:15:03.694 "num_base_bdevs_operational": 1, 00:15:03.694 "base_bdevs_list": [ 00:15:03.694 { 00:15:03.694 "name": null, 00:15:03.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.694 "is_configured": false, 00:15:03.694 "data_offset": 0, 00:15:03.694 "data_size": 63488 00:15:03.694 }, 00:15:03.694 { 00:15:03.694 "name": "BaseBdev2", 00:15:03.694 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:15:03.694 "is_configured": true, 00:15:03.694 "data_offset": 2048, 00:15:03.694 "data_size": 63488 00:15:03.694 } 00:15:03.694 ] 00:15:03.694 }' 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.694 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.263 16:30:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.263 "name": "raid_bdev1", 00:15:04.263 "uuid": "745fe1e7-c836-4f54-aab9-3dbf18fc6f29", 00:15:04.263 "strip_size_kb": 0, 00:15:04.263 "state": "online", 00:15:04.263 "raid_level": "raid1", 00:15:04.263 "superblock": true, 00:15:04.263 "num_base_bdevs": 2, 00:15:04.263 "num_base_bdevs_discovered": 1, 00:15:04.263 "num_base_bdevs_operational": 1, 00:15:04.263 "base_bdevs_list": [ 00:15:04.263 { 00:15:04.263 "name": null, 00:15:04.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.263 "is_configured": false, 00:15:04.263 "data_offset": 0, 00:15:04.263 "data_size": 63488 00:15:04.263 }, 00:15:04.263 { 00:15:04.263 "name": "BaseBdev2", 00:15:04.263 "uuid": "88477d23-b365-5b24-8f6b-08b24cf1efca", 00:15:04.263 "is_configured": true, 00:15:04.263 "data_offset": 2048, 00:15:04.263 "data_size": 63488 00:15:04.263 } 00:15:04.263 ] 00:15:04.263 }' 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.263 16:30:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87945 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 87945 ']' 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 87945 00:15:04.263 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:04.263 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.263 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87945 00:15:04.263 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:04.263 killing process with pid 87945 00:15:04.263 Received shutdown signal, test time was about 17.878053 seconds 00:15:04.263 00:15:04.263 Latency(us) 00:15:04.263 [2024-12-06T16:30:46.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.263 [2024-12-06T16:30:46.102Z] =================================================================================================================== 00:15:04.263 [2024-12-06T16:30:46.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:04.263 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:04.263 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87945' 00:15:04.263 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 87945 00:15:04.263 [2024-12-06 16:30:46.040287] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:04.263 [2024-12-06 16:30:46.040458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.263 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 87945 00:15:04.263 [2024-12-06 16:30:46.040523] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.263 [2024-12-06 16:30:46.040533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:04.263 [2024-12-06 16:30:46.068747] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:04.523 00:15:04.523 real 0m19.858s 00:15:04.523 user 0m26.448s 00:15:04.523 sys 0m2.081s 00:15:04.523 ************************************ 00:15:04.523 END TEST raid_rebuild_test_sb_io 00:15:04.523 ************************************ 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.523 16:30:46 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:04.523 16:30:46 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:04.523 16:30:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:04.523 16:30:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.523 16:30:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.523 ************************************ 00:15:04.523 START TEST raid_rebuild_test 00:15:04.523 ************************************ 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:04.523 16:30:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:04.523 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88636 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88636 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 88636 ']' 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.524 16:30:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.783 [2024-12-06 16:30:46.463974] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:15:04.783 [2024-12-06 16:30:46.464306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88636 ] 00:15:04.783 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:04.783 Zero copy mechanism will not be used. 00:15:05.042 [2024-12-06 16:30:46.656770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.042 [2024-12-06 16:30:46.684922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.043 [2024-12-06 16:30:46.729303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.043 [2024-12-06 16:30:46.729437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.611 BaseBdev1_malloc 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:15:05.611 [2024-12-06 16:30:47.394417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:05.611 [2024-12-06 16:30:47.394494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.611 [2024-12-06 16:30:47.394526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:05.611 [2024-12-06 16:30:47.394540] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.611 [2024-12-06 16:30:47.396998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.611 [2024-12-06 16:30:47.397039] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:05.611 BaseBdev1 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.611 BaseBdev2_malloc 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.611 [2024-12-06 16:30:47.415467] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:05.611 [2024-12-06 16:30:47.415578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:05.611 [2024-12-06 16:30:47.415606] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:05.611 [2024-12-06 16:30:47.415616] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.611 [2024-12-06 16:30:47.418046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.611 [2024-12-06 16:30:47.418085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:05.611 BaseBdev2 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.611 BaseBdev3_malloc 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.611 [2024-12-06 16:30:47.436447] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:05.611 [2024-12-06 16:30:47.436503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.611 [2024-12-06 16:30:47.436527] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:05.611 [2024-12-06 16:30:47.436537] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.611 [2024-12-06 16:30:47.438852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.611 [2024-12-06 16:30:47.438934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:05.611 BaseBdev3 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:05.611 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:05.612 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.612 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.870 BaseBdev4_malloc 00:15:05.870 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.870 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:05.870 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.870 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.870 [2024-12-06 16:30:47.465529] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:05.870 [2024-12-06 16:30:47.465590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.870 [2024-12-06 16:30:47.465619] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:05.870 [2024-12-06 16:30:47.465629] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.871 [2024-12-06 16:30:47.468064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.871 [2024-12-06 16:30:47.468153] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:05.871 BaseBdev4 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.871 spare_malloc 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.871 spare_delay 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.871 [2024-12-06 16:30:47.494555] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:05.871 [2024-12-06 16:30:47.494657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.871 [2024-12-06 16:30:47.494683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:05.871 [2024-12-06 16:30:47.494693] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.871 [2024-12-06 
16:30:47.497194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.871 [2024-12-06 16:30:47.497247] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:05.871 spare 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.871 [2024-12-06 16:30:47.502597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.871 [2024-12-06 16:30:47.504740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.871 [2024-12-06 16:30:47.504821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:05.871 [2024-12-06 16:30:47.504871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:05.871 [2024-12-06 16:30:47.504963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:05.871 [2024-12-06 16:30:47.504974] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:05.871 [2024-12-06 16:30:47.505304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:05.871 [2024-12-06 16:30:47.505548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:05.871 [2024-12-06 16:30:47.505570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:05.871 [2024-12-06 16:30:47.505712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.871 "name": "raid_bdev1", 00:15:05.871 "uuid": "259b806a-e83e-4a5e-bbea-2d1037242b0a", 00:15:05.871 "strip_size_kb": 0, 00:15:05.871 "state": "online", 00:15:05.871 "raid_level": 
"raid1", 00:15:05.871 "superblock": false, 00:15:05.871 "num_base_bdevs": 4, 00:15:05.871 "num_base_bdevs_discovered": 4, 00:15:05.871 "num_base_bdevs_operational": 4, 00:15:05.871 "base_bdevs_list": [ 00:15:05.871 { 00:15:05.871 "name": "BaseBdev1", 00:15:05.871 "uuid": "dbe5af50-ebc2-51a6-aa22-fdfb3c45a149", 00:15:05.871 "is_configured": true, 00:15:05.871 "data_offset": 0, 00:15:05.871 "data_size": 65536 00:15:05.871 }, 00:15:05.871 { 00:15:05.871 "name": "BaseBdev2", 00:15:05.871 "uuid": "8fed6607-f7ce-55b0-ae7d-df3d24193466", 00:15:05.871 "is_configured": true, 00:15:05.871 "data_offset": 0, 00:15:05.871 "data_size": 65536 00:15:05.871 }, 00:15:05.871 { 00:15:05.871 "name": "BaseBdev3", 00:15:05.871 "uuid": "bb2db011-bcff-5937-99f5-cf687939b8bc", 00:15:05.871 "is_configured": true, 00:15:05.871 "data_offset": 0, 00:15:05.871 "data_size": 65536 00:15:05.871 }, 00:15:05.871 { 00:15:05.871 "name": "BaseBdev4", 00:15:05.871 "uuid": "b957bde4-d9a5-5609-8ada-310bab79d190", 00:15:05.871 "is_configured": true, 00:15:05.871 "data_offset": 0, 00:15:05.871 "data_size": 65536 00:15:05.871 } 00:15:05.871 ] 00:15:05.871 }' 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.871 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.130 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:06.130 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:06.130 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.130 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.130 [2024-12-06 16:30:47.918282] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.130 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.130 16:30:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:06.130 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.130 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:06.130 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.130 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.130 16:30:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:06.388 16:30:47 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:06.388 [2024-12-06 16:30:48.185508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:06.388 /dev/nbd0 00:15:06.388 16:30:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:06.388 16:30:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:06.388 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:06.388 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:06.388 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.388 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:06.388 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.647 1+0 records in 00:15:06.647 1+0 records out 00:15:06.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360641 s, 11.4 MB/s 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:06.647 16:30:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:13.345 65536+0 records in 00:15:13.345 65536+0 records out 00:15:13.345 33554432 bytes (34 MB, 32 MiB) copied, 5.90113 s, 5.7 MB/s 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:13.345 [2024-12-06 16:30:54.377743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:13.345 
16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.345 [2024-12-06 16:30:54.417779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.345 16:30:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.345 "name": "raid_bdev1", 00:15:13.345 "uuid": "259b806a-e83e-4a5e-bbea-2d1037242b0a", 00:15:13.345 "strip_size_kb": 0, 00:15:13.345 "state": "online", 00:15:13.345 "raid_level": "raid1", 00:15:13.345 "superblock": false, 00:15:13.345 "num_base_bdevs": 4, 00:15:13.345 "num_base_bdevs_discovered": 3, 00:15:13.345 "num_base_bdevs_operational": 3, 00:15:13.345 "base_bdevs_list": [ 00:15:13.345 { 00:15:13.345 "name": null, 00:15:13.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.345 "is_configured": false, 00:15:13.345 "data_offset": 0, 00:15:13.345 "data_size": 65536 00:15:13.345 }, 00:15:13.345 { 00:15:13.345 "name": "BaseBdev2", 00:15:13.345 "uuid": "8fed6607-f7ce-55b0-ae7d-df3d24193466", 00:15:13.345 "is_configured": true, 00:15:13.345 "data_offset": 0, 00:15:13.345 "data_size": 65536 00:15:13.345 }, 00:15:13.345 { 00:15:13.345 "name": "BaseBdev3", 00:15:13.345 "uuid": "bb2db011-bcff-5937-99f5-cf687939b8bc", 00:15:13.345 "is_configured": true, 00:15:13.345 "data_offset": 0, 00:15:13.345 "data_size": 65536 00:15:13.345 }, 00:15:13.345 { 00:15:13.345 "name": "BaseBdev4", 00:15:13.345 "uuid": "b957bde4-d9a5-5609-8ada-310bab79d190", 00:15:13.345 
"is_configured": true, 00:15:13.345 "data_offset": 0, 00:15:13.345 "data_size": 65536 00:15:13.345 } 00:15:13.345 ] 00:15:13.345 }' 00:15:13.345 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.346 16:30:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.346 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:13.346 16:30:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.346 16:30:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.346 [2024-12-06 16:30:54.877087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.346 [2024-12-06 16:30:54.881572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:15:13.346 16:30:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.346 16:30:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:13.346 [2024-12-06 16:30:54.883865] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:14.299 16:30:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.299 16:30:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.299 16:30:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.299 16:30:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.299 16:30:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.299 16:30:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.299 16:30:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.299 
16:30:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.299 16:30:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.299 16:30:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.299 16:30:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.299 "name": "raid_bdev1", 00:15:14.299 "uuid": "259b806a-e83e-4a5e-bbea-2d1037242b0a", 00:15:14.299 "strip_size_kb": 0, 00:15:14.299 "state": "online", 00:15:14.299 "raid_level": "raid1", 00:15:14.299 "superblock": false, 00:15:14.299 "num_base_bdevs": 4, 00:15:14.299 "num_base_bdevs_discovered": 4, 00:15:14.299 "num_base_bdevs_operational": 4, 00:15:14.299 "process": { 00:15:14.299 "type": "rebuild", 00:15:14.299 "target": "spare", 00:15:14.299 "progress": { 00:15:14.299 "blocks": 20480, 00:15:14.299 "percent": 31 00:15:14.299 } 00:15:14.299 }, 00:15:14.299 "base_bdevs_list": [ 00:15:14.299 { 00:15:14.299 "name": "spare", 00:15:14.299 "uuid": "a3bdd0e2-e4f3-50c6-9a3b-4c60f71fc900", 00:15:14.299 "is_configured": true, 00:15:14.299 "data_offset": 0, 00:15:14.299 "data_size": 65536 00:15:14.299 }, 00:15:14.299 { 00:15:14.299 "name": "BaseBdev2", 00:15:14.299 "uuid": "8fed6607-f7ce-55b0-ae7d-df3d24193466", 00:15:14.299 "is_configured": true, 00:15:14.299 "data_offset": 0, 00:15:14.299 "data_size": 65536 00:15:14.299 }, 00:15:14.299 { 00:15:14.299 "name": "BaseBdev3", 00:15:14.300 "uuid": "bb2db011-bcff-5937-99f5-cf687939b8bc", 00:15:14.300 "is_configured": true, 00:15:14.300 "data_offset": 0, 00:15:14.300 "data_size": 65536 00:15:14.300 }, 00:15:14.300 { 00:15:14.300 "name": "BaseBdev4", 00:15:14.300 "uuid": "b957bde4-d9a5-5609-8ada-310bab79d190", 00:15:14.300 "is_configured": true, 00:15:14.300 "data_offset": 0, 00:15:14.300 "data_size": 65536 00:15:14.300 } 00:15:14.300 ] 00:15:14.300 }' 00:15:14.300 16:30:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:15:14.300 16:30:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.300 16:30:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.300 [2024-12-06 16:30:56.016578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.300 [2024-12-06 16:30:56.089748] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:14.300 [2024-12-06 16:30:56.089831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.300 [2024-12-06 16:30:56.089851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.300 [2024-12-06 16:30:56.089858] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.300 16:30:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.300 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.559 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.559 "name": "raid_bdev1", 00:15:14.559 "uuid": "259b806a-e83e-4a5e-bbea-2d1037242b0a", 00:15:14.559 "strip_size_kb": 0, 00:15:14.559 "state": "online", 00:15:14.559 "raid_level": "raid1", 00:15:14.559 "superblock": false, 00:15:14.559 "num_base_bdevs": 4, 00:15:14.559 "num_base_bdevs_discovered": 3, 00:15:14.559 "num_base_bdevs_operational": 3, 00:15:14.559 "base_bdevs_list": [ 00:15:14.559 { 00:15:14.559 "name": null, 00:15:14.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.559 "is_configured": false, 00:15:14.559 "data_offset": 0, 00:15:14.559 "data_size": 65536 00:15:14.559 }, 00:15:14.559 { 00:15:14.559 "name": "BaseBdev2", 00:15:14.559 "uuid": "8fed6607-f7ce-55b0-ae7d-df3d24193466", 00:15:14.559 "is_configured": true, 00:15:14.559 "data_offset": 0, 00:15:14.559 "data_size": 65536 00:15:14.559 }, 00:15:14.559 { 00:15:14.559 "name": 
"BaseBdev3", 00:15:14.559 "uuid": "bb2db011-bcff-5937-99f5-cf687939b8bc", 00:15:14.559 "is_configured": true, 00:15:14.559 "data_offset": 0, 00:15:14.559 "data_size": 65536 00:15:14.559 }, 00:15:14.559 { 00:15:14.559 "name": "BaseBdev4", 00:15:14.559 "uuid": "b957bde4-d9a5-5609-8ada-310bab79d190", 00:15:14.559 "is_configured": true, 00:15:14.559 "data_offset": 0, 00:15:14.559 "data_size": 65536 00:15:14.559 } 00:15:14.559 ] 00:15:14.559 }' 00:15:14.559 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.559 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.819 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.819 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.819 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.819 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.819 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.819 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.819 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.819 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.819 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.819 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.819 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.819 "name": "raid_bdev1", 00:15:14.819 "uuid": "259b806a-e83e-4a5e-bbea-2d1037242b0a", 00:15:14.819 "strip_size_kb": 0, 00:15:14.819 "state": "online", 00:15:14.819 "raid_level": 
"raid1", 00:15:14.819 "superblock": false, 00:15:14.819 "num_base_bdevs": 4, 00:15:14.819 "num_base_bdevs_discovered": 3, 00:15:14.819 "num_base_bdevs_operational": 3, 00:15:14.819 "base_bdevs_list": [ 00:15:14.819 { 00:15:14.819 "name": null, 00:15:14.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.819 "is_configured": false, 00:15:14.819 "data_offset": 0, 00:15:14.819 "data_size": 65536 00:15:14.819 }, 00:15:14.819 { 00:15:14.819 "name": "BaseBdev2", 00:15:14.819 "uuid": "8fed6607-f7ce-55b0-ae7d-df3d24193466", 00:15:14.819 "is_configured": true, 00:15:14.819 "data_offset": 0, 00:15:14.819 "data_size": 65536 00:15:14.819 }, 00:15:14.819 { 00:15:14.819 "name": "BaseBdev3", 00:15:14.819 "uuid": "bb2db011-bcff-5937-99f5-cf687939b8bc", 00:15:14.819 "is_configured": true, 00:15:14.819 "data_offset": 0, 00:15:14.819 "data_size": 65536 00:15:14.819 }, 00:15:14.819 { 00:15:14.819 "name": "BaseBdev4", 00:15:14.819 "uuid": "b957bde4-d9a5-5609-8ada-310bab79d190", 00:15:14.819 "is_configured": true, 00:15:14.819 "data_offset": 0, 00:15:14.819 "data_size": 65536 00:15:14.819 } 00:15:14.819 ] 00:15:14.819 }' 00:15:14.820 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.820 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.820 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.079 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.079 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:15.079 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.079 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.079 [2024-12-06 16:30:56.673664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:15:15.079 [2024-12-06 16:30:56.678069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:15:15.079 16:30:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.079 16:30:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:15.079 [2024-12-06 16:30:56.680360] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.018 "name": "raid_bdev1", 00:15:16.018 "uuid": "259b806a-e83e-4a5e-bbea-2d1037242b0a", 00:15:16.018 "strip_size_kb": 0, 00:15:16.018 "state": "online", 00:15:16.018 "raid_level": "raid1", 00:15:16.018 "superblock": false, 00:15:16.018 "num_base_bdevs": 4, 00:15:16.018 "num_base_bdevs_discovered": 4, 00:15:16.018 "num_base_bdevs_operational": 4, 
00:15:16.018 "process": { 00:15:16.018 "type": "rebuild", 00:15:16.018 "target": "spare", 00:15:16.018 "progress": { 00:15:16.018 "blocks": 20480, 00:15:16.018 "percent": 31 00:15:16.018 } 00:15:16.018 }, 00:15:16.018 "base_bdevs_list": [ 00:15:16.018 { 00:15:16.018 "name": "spare", 00:15:16.018 "uuid": "a3bdd0e2-e4f3-50c6-9a3b-4c60f71fc900", 00:15:16.018 "is_configured": true, 00:15:16.018 "data_offset": 0, 00:15:16.018 "data_size": 65536 00:15:16.018 }, 00:15:16.018 { 00:15:16.018 "name": "BaseBdev2", 00:15:16.018 "uuid": "8fed6607-f7ce-55b0-ae7d-df3d24193466", 00:15:16.018 "is_configured": true, 00:15:16.018 "data_offset": 0, 00:15:16.018 "data_size": 65536 00:15:16.018 }, 00:15:16.018 { 00:15:16.018 "name": "BaseBdev3", 00:15:16.018 "uuid": "bb2db011-bcff-5937-99f5-cf687939b8bc", 00:15:16.018 "is_configured": true, 00:15:16.018 "data_offset": 0, 00:15:16.018 "data_size": 65536 00:15:16.018 }, 00:15:16.018 { 00:15:16.018 "name": "BaseBdev4", 00:15:16.018 "uuid": "b957bde4-d9a5-5609-8ada-310bab79d190", 00:15:16.018 "is_configured": true, 00:15:16.018 "data_offset": 0, 00:15:16.018 "data_size": 65536 00:15:16.018 } 00:15:16.018 ] 00:15:16.018 }' 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.018 16:30:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.018 [2024-12-06 16:30:57.840795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.278 [2024-12-06 16:30:57.885783] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.278 "name": "raid_bdev1", 00:15:16.278 "uuid": "259b806a-e83e-4a5e-bbea-2d1037242b0a", 00:15:16.278 "strip_size_kb": 0, 00:15:16.278 "state": "online", 00:15:16.278 "raid_level": "raid1", 00:15:16.278 "superblock": false, 00:15:16.278 "num_base_bdevs": 4, 00:15:16.278 "num_base_bdevs_discovered": 3, 00:15:16.278 "num_base_bdevs_operational": 3, 00:15:16.278 "process": { 00:15:16.278 "type": "rebuild", 00:15:16.278 "target": "spare", 00:15:16.278 "progress": { 00:15:16.278 "blocks": 24576, 00:15:16.278 "percent": 37 00:15:16.278 } 00:15:16.278 }, 00:15:16.278 "base_bdevs_list": [ 00:15:16.278 { 00:15:16.278 "name": "spare", 00:15:16.278 "uuid": "a3bdd0e2-e4f3-50c6-9a3b-4c60f71fc900", 00:15:16.278 "is_configured": true, 00:15:16.278 "data_offset": 0, 00:15:16.278 "data_size": 65536 00:15:16.278 }, 00:15:16.278 { 00:15:16.278 "name": null, 00:15:16.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.278 "is_configured": false, 00:15:16.278 "data_offset": 0, 00:15:16.278 "data_size": 65536 00:15:16.278 }, 00:15:16.278 { 00:15:16.278 "name": "BaseBdev3", 00:15:16.278 "uuid": "bb2db011-bcff-5937-99f5-cf687939b8bc", 00:15:16.278 "is_configured": true, 00:15:16.278 "data_offset": 0, 00:15:16.278 "data_size": 65536 00:15:16.278 }, 00:15:16.278 { 00:15:16.278 "name": "BaseBdev4", 00:15:16.278 "uuid": "b957bde4-d9a5-5609-8ada-310bab79d190", 00:15:16.278 "is_configured": true, 00:15:16.278 "data_offset": 0, 00:15:16.278 "data_size": 65536 00:15:16.278 } 00:15:16.278 ] 00:15:16.278 }' 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.278 16:30:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.278 16:30:58 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.278 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=368 00:15:16.278 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.278 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.278 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.278 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.278 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.278 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.278 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.278 16:30:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.278 16:30:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.279 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.279 16:30:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.279 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.279 "name": "raid_bdev1", 00:15:16.279 "uuid": "259b806a-e83e-4a5e-bbea-2d1037242b0a", 00:15:16.279 "strip_size_kb": 0, 00:15:16.279 "state": "online", 00:15:16.279 "raid_level": "raid1", 00:15:16.279 "superblock": false, 00:15:16.279 "num_base_bdevs": 4, 00:15:16.279 "num_base_bdevs_discovered": 3, 00:15:16.279 "num_base_bdevs_operational": 3, 00:15:16.279 "process": { 00:15:16.279 "type": "rebuild", 00:15:16.279 "target": "spare", 00:15:16.279 "progress": { 00:15:16.279 "blocks": 26624, 00:15:16.279 "percent": 40 
00:15:16.279 } 00:15:16.279 }, 00:15:16.279 "base_bdevs_list": [ 00:15:16.279 { 00:15:16.279 "name": "spare", 00:15:16.279 "uuid": "a3bdd0e2-e4f3-50c6-9a3b-4c60f71fc900", 00:15:16.279 "is_configured": true, 00:15:16.279 "data_offset": 0, 00:15:16.279 "data_size": 65536 00:15:16.279 }, 00:15:16.279 { 00:15:16.279 "name": null, 00:15:16.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.279 "is_configured": false, 00:15:16.279 "data_offset": 0, 00:15:16.279 "data_size": 65536 00:15:16.279 }, 00:15:16.279 { 00:15:16.279 "name": "BaseBdev3", 00:15:16.279 "uuid": "bb2db011-bcff-5937-99f5-cf687939b8bc", 00:15:16.279 "is_configured": true, 00:15:16.279 "data_offset": 0, 00:15:16.279 "data_size": 65536 00:15:16.279 }, 00:15:16.279 { 00:15:16.279 "name": "BaseBdev4", 00:15:16.279 "uuid": "b957bde4-d9a5-5609-8ada-310bab79d190", 00:15:16.279 "is_configured": true, 00:15:16.279 "data_offset": 0, 00:15:16.279 "data_size": 65536 00:15:16.279 } 00:15:16.279 ] 00:15:16.279 }' 00:15:16.279 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.537 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.537 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.537 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.537 16:30:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.470 16:30:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.470 "name": "raid_bdev1", 00:15:17.470 "uuid": "259b806a-e83e-4a5e-bbea-2d1037242b0a", 00:15:17.470 "strip_size_kb": 0, 00:15:17.470 "state": "online", 00:15:17.470 "raid_level": "raid1", 00:15:17.470 "superblock": false, 00:15:17.470 "num_base_bdevs": 4, 00:15:17.470 "num_base_bdevs_discovered": 3, 00:15:17.470 "num_base_bdevs_operational": 3, 00:15:17.470 "process": { 00:15:17.470 "type": "rebuild", 00:15:17.470 "target": "spare", 00:15:17.470 "progress": { 00:15:17.470 "blocks": 51200, 00:15:17.470 "percent": 78 00:15:17.470 } 00:15:17.470 }, 00:15:17.470 "base_bdevs_list": [ 00:15:17.470 { 00:15:17.470 "name": "spare", 00:15:17.470 "uuid": "a3bdd0e2-e4f3-50c6-9a3b-4c60f71fc900", 00:15:17.470 "is_configured": true, 00:15:17.470 "data_offset": 0, 00:15:17.470 "data_size": 65536 00:15:17.470 }, 00:15:17.470 { 00:15:17.470 "name": null, 00:15:17.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.470 "is_configured": false, 00:15:17.470 "data_offset": 0, 00:15:17.470 "data_size": 65536 00:15:17.470 }, 00:15:17.470 { 00:15:17.470 "name": "BaseBdev3", 00:15:17.470 "uuid": "bb2db011-bcff-5937-99f5-cf687939b8bc", 00:15:17.470 "is_configured": true, 
00:15:17.470 "data_offset": 0, 00:15:17.470 "data_size": 65536 00:15:17.470 }, 00:15:17.470 { 00:15:17.470 "name": "BaseBdev4", 00:15:17.470 "uuid": "b957bde4-d9a5-5609-8ada-310bab79d190", 00:15:17.470 "is_configured": true, 00:15:17.470 "data_offset": 0, 00:15:17.470 "data_size": 65536 00:15:17.470 } 00:15:17.470 ] 00:15:17.470 }' 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.470 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.729 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.729 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.729 16:30:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.297 [2024-12-06 16:30:59.895914] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:18.297 [2024-12-06 16:30:59.896022] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:18.297 [2024-12-06 16:30:59.896080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.556 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.556 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.556 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.556 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.556 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.556 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.556 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:18.556 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.557 16:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.557 16:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.557 16:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.816 "name": "raid_bdev1", 00:15:18.816 "uuid": "259b806a-e83e-4a5e-bbea-2d1037242b0a", 00:15:18.816 "strip_size_kb": 0, 00:15:18.816 "state": "online", 00:15:18.816 "raid_level": "raid1", 00:15:18.816 "superblock": false, 00:15:18.816 "num_base_bdevs": 4, 00:15:18.816 "num_base_bdevs_discovered": 3, 00:15:18.816 "num_base_bdevs_operational": 3, 00:15:18.816 "base_bdevs_list": [ 00:15:18.816 { 00:15:18.816 "name": "spare", 00:15:18.816 "uuid": "a3bdd0e2-e4f3-50c6-9a3b-4c60f71fc900", 00:15:18.816 "is_configured": true, 00:15:18.816 "data_offset": 0, 00:15:18.816 "data_size": 65536 00:15:18.816 }, 00:15:18.816 { 00:15:18.816 "name": null, 00:15:18.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.816 "is_configured": false, 00:15:18.816 "data_offset": 0, 00:15:18.816 "data_size": 65536 00:15:18.816 }, 00:15:18.816 { 00:15:18.816 "name": "BaseBdev3", 00:15:18.816 "uuid": "bb2db011-bcff-5937-99f5-cf687939b8bc", 00:15:18.816 "is_configured": true, 00:15:18.816 "data_offset": 0, 00:15:18.816 "data_size": 65536 00:15:18.816 }, 00:15:18.816 { 00:15:18.816 "name": "BaseBdev4", 00:15:18.816 "uuid": "b957bde4-d9a5-5609-8ada-310bab79d190", 00:15:18.816 "is_configured": true, 00:15:18.816 "data_offset": 0, 00:15:18.816 "data_size": 65536 00:15:18.816 } 00:15:18.816 ] 00:15:18.816 }' 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.816 16:31:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.816 "name": "raid_bdev1", 00:15:18.816 "uuid": "259b806a-e83e-4a5e-bbea-2d1037242b0a", 00:15:18.816 "strip_size_kb": 0, 00:15:18.816 "state": "online", 00:15:18.816 "raid_level": "raid1", 00:15:18.816 "superblock": false, 00:15:18.816 "num_base_bdevs": 4, 00:15:18.816 "num_base_bdevs_discovered": 3, 00:15:18.816 "num_base_bdevs_operational": 3, 00:15:18.816 "base_bdevs_list": [ 00:15:18.816 { 00:15:18.816 "name": "spare", 
00:15:18.816 "uuid": "a3bdd0e2-e4f3-50c6-9a3b-4c60f71fc900", 00:15:18.816 "is_configured": true, 00:15:18.816 "data_offset": 0, 00:15:18.816 "data_size": 65536 00:15:18.816 }, 00:15:18.816 { 00:15:18.816 "name": null, 00:15:18.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.816 "is_configured": false, 00:15:18.816 "data_offset": 0, 00:15:18.816 "data_size": 65536 00:15:18.816 }, 00:15:18.816 { 00:15:18.816 "name": "BaseBdev3", 00:15:18.816 "uuid": "bb2db011-bcff-5937-99f5-cf687939b8bc", 00:15:18.816 "is_configured": true, 00:15:18.816 "data_offset": 0, 00:15:18.816 "data_size": 65536 00:15:18.816 }, 00:15:18.816 { 00:15:18.816 "name": "BaseBdev4", 00:15:18.816 "uuid": "b957bde4-d9a5-5609-8ada-310bab79d190", 00:15:18.816 "is_configured": true, 00:15:18.816 "data_offset": 0, 00:15:18.816 "data_size": 65536 00:15:18.816 } 00:15:18.816 ] 00:15:18.816 }' 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.816 16:31:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.816 16:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.074 16:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.074 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.074 "name": "raid_bdev1", 00:15:19.074 "uuid": "259b806a-e83e-4a5e-bbea-2d1037242b0a", 00:15:19.074 "strip_size_kb": 0, 00:15:19.074 "state": "online", 00:15:19.074 "raid_level": "raid1", 00:15:19.074 "superblock": false, 00:15:19.074 "num_base_bdevs": 4, 00:15:19.074 "num_base_bdevs_discovered": 3, 00:15:19.074 "num_base_bdevs_operational": 3, 00:15:19.074 "base_bdevs_list": [ 00:15:19.074 { 00:15:19.074 "name": "spare", 00:15:19.074 "uuid": "a3bdd0e2-e4f3-50c6-9a3b-4c60f71fc900", 00:15:19.074 "is_configured": true, 00:15:19.074 "data_offset": 0, 00:15:19.074 "data_size": 65536 00:15:19.074 }, 00:15:19.074 { 00:15:19.074 "name": null, 00:15:19.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.074 "is_configured": false, 00:15:19.074 "data_offset": 0, 00:15:19.074 "data_size": 65536 00:15:19.074 }, 00:15:19.074 { 00:15:19.074 "name": "BaseBdev3", 00:15:19.074 "uuid": "bb2db011-bcff-5937-99f5-cf687939b8bc", 00:15:19.074 "is_configured": true, 
00:15:19.074 "data_offset": 0, 00:15:19.074 "data_size": 65536 00:15:19.074 }, 00:15:19.074 { 00:15:19.074 "name": "BaseBdev4", 00:15:19.074 "uuid": "b957bde4-d9a5-5609-8ada-310bab79d190", 00:15:19.074 "is_configured": true, 00:15:19.074 "data_offset": 0, 00:15:19.074 "data_size": 65536 00:15:19.074 } 00:15:19.074 ] 00:15:19.074 }' 00:15:19.074 16:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.074 16:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.331 [2024-12-06 16:31:01.075142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.331 [2024-12-06 16:31:01.075251] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.331 [2024-12-06 16:31:01.075395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.331 [2024-12-06 16:31:01.075533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.331 [2024-12-06 16:31:01.075605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:19.331 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:19.590 /dev/nbd0 00:15:19.590 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.590 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:19.590 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:19.590 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:19.590 16:31:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.590 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.590 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:19.590 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:19.590 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.590 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.590 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.590 1+0 records in 00:15:19.590 1+0 records out 00:15:19.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588304 s, 7.0 MB/s 00:15:19.849 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.849 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:19.849 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.849 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.849 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:19.849 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.849 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:19.849 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:19.849 /dev/nbd1 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:20.109 
16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:20.109 1+0 records in 00:15:20.109 1+0 records out 00:15:20.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657707 s, 6.2 MB/s 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.109 16:31:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:20.369 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.369 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.369 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.369 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.369 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.369 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.369 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:20.369 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.369 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.369 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:20.629 
16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:20.629 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:20.629 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:20.629 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.629 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.629 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:20.629 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:20.629 16:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.629 16:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:20.629 16:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88636 00:15:20.629 16:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 88636 ']' 00:15:20.630 16:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 88636 00:15:20.630 16:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:20.630 16:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.630 16:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88636 00:15:20.630 killing process with pid 88636 00:15:20.630 Received shutdown signal, test time was about 60.000000 seconds 00:15:20.630 00:15:20.630 Latency(us) 00:15:20.630 [2024-12-06T16:31:02.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.630 [2024-12-06T16:31:02.469Z] =================================================================================================================== 00:15:20.630 [2024-12-06T16:31:02.469Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:15:20.630 16:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.630 16:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.630 16:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88636' 00:15:20.630 16:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 88636 00:15:20.630 [2024-12-06 16:31:02.413963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.630 16:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 88636 00:15:21.001 [2024-12-06 16:31:02.467003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.001 16:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:21.001 00:15:21.001 real 0m16.338s 00:15:21.001 user 0m18.503s 00:15:21.001 sys 0m3.180s 00:15:21.001 ************************************ 00:15:21.001 END TEST raid_rebuild_test 00:15:21.001 ************************************ 00:15:21.001 16:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.001 16:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.001 16:31:02 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:21.001 16:31:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:21.001 16:31:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.001 16:31:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.001 ************************************ 00:15:21.001 START TEST raid_rebuild_test_sb 00:15:21.002 ************************************ 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:15:21.002 16:31:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:21.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=89071 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 89071 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 89071 ']' 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.002 16:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.270 [2024-12-06 16:31:02.864072] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:15:21.270 [2024-12-06 16:31:02.864401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89071 ] 00:15:21.270 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:21.270 Zero copy mechanism will not be used. 00:15:21.270 [2024-12-06 16:31:03.044574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.270 [2024-12-06 16:31:03.073776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.531 [2024-12-06 16:31:03.119109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.531 [2024-12-06 16:31:03.119245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.099 16:31:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.099 BaseBdev1_malloc 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.099 [2024-12-06 16:31:03.756140] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:22.099 [2024-12-06 16:31:03.756333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.099 [2024-12-06 16:31:03.756390] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:22.099 [2024-12-06 16:31:03.756437] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.099 [2024-12-06 16:31:03.758813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.099 [2024-12-06 16:31:03.758904] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:22.099 BaseBdev1 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.099 BaseBdev2_malloc 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.099 [2024-12-06 16:31:03.785002] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:22.099 [2024-12-06 16:31:03.785154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.099 [2024-12-06 16:31:03.785198] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:22.099 [2024-12-06 16:31:03.785246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.099 [2024-12-06 16:31:03.787469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.099 [2024-12-06 16:31:03.787544] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:22.099 BaseBdev2 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.099 BaseBdev3_malloc 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:22.099 16:31:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.099 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.099 [2024-12-06 16:31:03.813843] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:22.099 [2024-12-06 16:31:03.813979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.099 [2024-12-06 16:31:03.814042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:22.099 [2024-12-06 16:31:03.814079] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.099 [2024-12-06 16:31:03.816551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.099 [2024-12-06 16:31:03.816635] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:22.100 BaseBdev3 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.100 BaseBdev4_malloc 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.100 
[2024-12-06 16:31:03.853019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:22.100 [2024-12-06 16:31:03.853092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.100 [2024-12-06 16:31:03.853122] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:22.100 [2024-12-06 16:31:03.853132] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.100 [2024-12-06 16:31:03.855564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.100 [2024-12-06 16:31:03.855705] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:22.100 BaseBdev4 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.100 spare_malloc 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.100 spare_delay 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:22.100 16:31:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.100 [2024-12-06 16:31:03.894324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:22.100 [2024-12-06 16:31:03.894382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.100 [2024-12-06 16:31:03.894404] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:22.100 [2024-12-06 16:31:03.894414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.100 [2024-12-06 16:31:03.896749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.100 [2024-12-06 16:31:03.896797] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:22.100 spare 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.100 [2024-12-06 16:31:03.906406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.100 [2024-12-06 16:31:03.908651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.100 [2024-12-06 16:31:03.908747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.100 [2024-12-06 16:31:03.908800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:22.100 [2024-12-06 16:31:03.909020] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:22.100 [2024-12-06 16:31:03.909037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:22.100 [2024-12-06 16:31:03.909372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:22.100 [2024-12-06 16:31:03.909564] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:22.100 [2024-12-06 16:31:03.909585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:22.100 [2024-12-06 16:31:03.909734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.100 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.360 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.360 16:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.360 "name": "raid_bdev1", 00:15:22.360 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:22.360 "strip_size_kb": 0, 00:15:22.360 "state": "online", 00:15:22.360 "raid_level": "raid1", 00:15:22.360 "superblock": true, 00:15:22.360 "num_base_bdevs": 4, 00:15:22.360 "num_base_bdevs_discovered": 4, 00:15:22.360 "num_base_bdevs_operational": 4, 00:15:22.360 "base_bdevs_list": [ 00:15:22.360 { 00:15:22.360 "name": "BaseBdev1", 00:15:22.360 "uuid": "8d12b2d3-09da-57e0-996c-bdb6cc611b51", 00:15:22.360 "is_configured": true, 00:15:22.360 "data_offset": 2048, 00:15:22.360 "data_size": 63488 00:15:22.360 }, 00:15:22.360 { 00:15:22.360 "name": "BaseBdev2", 00:15:22.360 "uuid": "dc5d5d46-62e2-59d0-bbc2-7a5979044dc4", 00:15:22.360 "is_configured": true, 00:15:22.360 "data_offset": 2048, 00:15:22.360 "data_size": 63488 00:15:22.360 }, 00:15:22.360 { 00:15:22.360 "name": "BaseBdev3", 00:15:22.360 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:22.360 "is_configured": true, 00:15:22.360 "data_offset": 2048, 00:15:22.360 "data_size": 63488 00:15:22.360 }, 00:15:22.360 { 00:15:22.360 "name": "BaseBdev4", 00:15:22.360 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:22.360 "is_configured": true, 00:15:22.360 "data_offset": 2048, 00:15:22.360 "data_size": 63488 00:15:22.360 } 00:15:22.360 ] 00:15:22.360 }' 00:15:22.360 16:31:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.360 16:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.620 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:22.620 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.620 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.620 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:22.620 [2024-12-06 16:31:04.405981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.620 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.620 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:22.620 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.620 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.620 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.620 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:22.880 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:23.139 [2024-12-06 16:31:04.753114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:23.139 /dev/nbd0 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:23.139 
16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.139 1+0 records in 00:15:23.139 1+0 records out 00:15:23.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483828 s, 8.5 MB/s 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:23.139 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.140 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.140 16:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:23.140 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.140 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:23.140 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:23.140 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:23.140 16:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:31.296 63488+0 records in 00:15:31.296 63488+0 records out 00:15:31.296 32505856 bytes (33 MB, 31 MiB) copied, 7.05813 s, 4.6 MB/s 00:15:31.296 16:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:31.296 16:31:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.296 16:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:31.296 16:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.296 16:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:31.296 16:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.297 16:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:31.297 [2024-12-06 16:31:12.158556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.297 [2024-12-06 16:31:12.174626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.297 
16:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.297 "name": "raid_bdev1", 00:15:31.297 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:31.297 "strip_size_kb": 0, 00:15:31.297 "state": 
"online", 00:15:31.297 "raid_level": "raid1", 00:15:31.297 "superblock": true, 00:15:31.297 "num_base_bdevs": 4, 00:15:31.297 "num_base_bdevs_discovered": 3, 00:15:31.297 "num_base_bdevs_operational": 3, 00:15:31.297 "base_bdevs_list": [ 00:15:31.297 { 00:15:31.297 "name": null, 00:15:31.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.297 "is_configured": false, 00:15:31.297 "data_offset": 0, 00:15:31.297 "data_size": 63488 00:15:31.297 }, 00:15:31.297 { 00:15:31.297 "name": "BaseBdev2", 00:15:31.297 "uuid": "dc5d5d46-62e2-59d0-bbc2-7a5979044dc4", 00:15:31.297 "is_configured": true, 00:15:31.297 "data_offset": 2048, 00:15:31.297 "data_size": 63488 00:15:31.297 }, 00:15:31.297 { 00:15:31.297 "name": "BaseBdev3", 00:15:31.297 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:31.297 "is_configured": true, 00:15:31.297 "data_offset": 2048, 00:15:31.297 "data_size": 63488 00:15:31.297 }, 00:15:31.297 { 00:15:31.297 "name": "BaseBdev4", 00:15:31.297 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:31.297 "is_configured": true, 00:15:31.297 "data_offset": 2048, 00:15:31.297 "data_size": 63488 00:15:31.297 } 00:15:31.297 ] 00:15:31.297 }' 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.297 [2024-12-06 16:31:12.661931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.297 [2024-12-06 16:31:12.666603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.297 16:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:31.297 [2024-12-06 16:31:12.669090] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:31.867 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.867 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.867 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.867 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.867 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.867 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.867 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.867 16:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.867 16:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.867 16:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.127 "name": "raid_bdev1", 00:15:32.127 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:32.127 "strip_size_kb": 0, 00:15:32.127 "state": "online", 00:15:32.127 "raid_level": "raid1", 00:15:32.127 "superblock": true, 00:15:32.127 "num_base_bdevs": 4, 00:15:32.127 "num_base_bdevs_discovered": 4, 00:15:32.127 "num_base_bdevs_operational": 4, 00:15:32.127 "process": { 00:15:32.127 "type": "rebuild", 00:15:32.127 "target": "spare", 00:15:32.127 "progress": { 00:15:32.127 "blocks": 20480, 
00:15:32.127 "percent": 32 00:15:32.127 } 00:15:32.127 }, 00:15:32.127 "base_bdevs_list": [ 00:15:32.127 { 00:15:32.127 "name": "spare", 00:15:32.127 "uuid": "730e98ab-457c-52eb-b362-09e9fa9ad474", 00:15:32.127 "is_configured": true, 00:15:32.127 "data_offset": 2048, 00:15:32.127 "data_size": 63488 00:15:32.127 }, 00:15:32.127 { 00:15:32.127 "name": "BaseBdev2", 00:15:32.127 "uuid": "dc5d5d46-62e2-59d0-bbc2-7a5979044dc4", 00:15:32.127 "is_configured": true, 00:15:32.127 "data_offset": 2048, 00:15:32.127 "data_size": 63488 00:15:32.127 }, 00:15:32.127 { 00:15:32.127 "name": "BaseBdev3", 00:15:32.127 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:32.127 "is_configured": true, 00:15:32.127 "data_offset": 2048, 00:15:32.127 "data_size": 63488 00:15:32.127 }, 00:15:32.127 { 00:15:32.127 "name": "BaseBdev4", 00:15:32.127 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:32.127 "is_configured": true, 00:15:32.127 "data_offset": 2048, 00:15:32.127 "data_size": 63488 00:15:32.127 } 00:15:32.127 ] 00:15:32.127 }' 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.127 [2024-12-06 16:31:13.841134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.127 [2024-12-06 16:31:13.875632] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:32.127 [2024-12-06 16:31:13.875746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.127 [2024-12-06 16:31:13.875773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.127 [2024-12-06 16:31:13.875784] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.127 "name": "raid_bdev1", 00:15:32.127 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:32.127 "strip_size_kb": 0, 00:15:32.127 "state": "online", 00:15:32.127 "raid_level": "raid1", 00:15:32.127 "superblock": true, 00:15:32.127 "num_base_bdevs": 4, 00:15:32.127 "num_base_bdevs_discovered": 3, 00:15:32.127 "num_base_bdevs_operational": 3, 00:15:32.127 "base_bdevs_list": [ 00:15:32.127 { 00:15:32.127 "name": null, 00:15:32.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.127 "is_configured": false, 00:15:32.127 "data_offset": 0, 00:15:32.127 "data_size": 63488 00:15:32.127 }, 00:15:32.127 { 00:15:32.127 "name": "BaseBdev2", 00:15:32.127 "uuid": "dc5d5d46-62e2-59d0-bbc2-7a5979044dc4", 00:15:32.127 "is_configured": true, 00:15:32.127 "data_offset": 2048, 00:15:32.127 "data_size": 63488 00:15:32.127 }, 00:15:32.127 { 00:15:32.127 "name": "BaseBdev3", 00:15:32.127 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:32.127 "is_configured": true, 00:15:32.127 "data_offset": 2048, 00:15:32.127 "data_size": 63488 00:15:32.127 }, 00:15:32.127 { 00:15:32.127 "name": "BaseBdev4", 00:15:32.127 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:32.127 "is_configured": true, 00:15:32.127 "data_offset": 2048, 00:15:32.127 "data_size": 63488 00:15:32.127 } 00:15:32.127 ] 00:15:32.127 }' 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.127 16:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.696 "name": "raid_bdev1", 00:15:32.696 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:32.696 "strip_size_kb": 0, 00:15:32.696 "state": "online", 00:15:32.696 "raid_level": "raid1", 00:15:32.696 "superblock": true, 00:15:32.696 "num_base_bdevs": 4, 00:15:32.696 "num_base_bdevs_discovered": 3, 00:15:32.696 "num_base_bdevs_operational": 3, 00:15:32.696 "base_bdevs_list": [ 00:15:32.696 { 00:15:32.696 "name": null, 00:15:32.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.696 "is_configured": false, 00:15:32.696 "data_offset": 0, 00:15:32.696 "data_size": 63488 00:15:32.696 }, 00:15:32.696 { 00:15:32.696 "name": "BaseBdev2", 00:15:32.696 "uuid": "dc5d5d46-62e2-59d0-bbc2-7a5979044dc4", 00:15:32.696 "is_configured": true, 00:15:32.696 "data_offset": 2048, 00:15:32.696 "data_size": 63488 00:15:32.696 }, 00:15:32.696 { 00:15:32.696 "name": "BaseBdev3", 00:15:32.696 "uuid": 
"cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:32.696 "is_configured": true, 00:15:32.696 "data_offset": 2048, 00:15:32.696 "data_size": 63488 00:15:32.696 }, 00:15:32.696 { 00:15:32.696 "name": "BaseBdev4", 00:15:32.696 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:32.696 "is_configured": true, 00:15:32.696 "data_offset": 2048, 00:15:32.696 "data_size": 63488 00:15:32.696 } 00:15:32.696 ] 00:15:32.696 }' 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:32.696 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.956 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:32.956 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:32.956 16:31:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.956 16:31:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.956 [2024-12-06 16:31:14.587975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:32.956 [2024-12-06 16:31:14.592441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:15:32.956 16:31:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.956 16:31:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:32.956 [2024-12-06 16:31:14.594761] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.892 "name": "raid_bdev1", 00:15:33.892 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:33.892 "strip_size_kb": 0, 00:15:33.892 "state": "online", 00:15:33.892 "raid_level": "raid1", 00:15:33.892 "superblock": true, 00:15:33.892 "num_base_bdevs": 4, 00:15:33.892 "num_base_bdevs_discovered": 4, 00:15:33.892 "num_base_bdevs_operational": 4, 00:15:33.892 "process": { 00:15:33.892 "type": "rebuild", 00:15:33.892 "target": "spare", 00:15:33.892 "progress": { 00:15:33.892 "blocks": 20480, 00:15:33.892 "percent": 32 00:15:33.892 } 00:15:33.892 }, 00:15:33.892 "base_bdevs_list": [ 00:15:33.892 { 00:15:33.892 "name": "spare", 00:15:33.892 "uuid": "730e98ab-457c-52eb-b362-09e9fa9ad474", 00:15:33.892 "is_configured": true, 00:15:33.892 "data_offset": 2048, 00:15:33.892 "data_size": 63488 00:15:33.892 }, 00:15:33.892 { 00:15:33.892 "name": "BaseBdev2", 00:15:33.892 "uuid": "dc5d5d46-62e2-59d0-bbc2-7a5979044dc4", 00:15:33.892 "is_configured": true, 00:15:33.892 "data_offset": 2048, 
00:15:33.892 "data_size": 63488 00:15:33.892 }, 00:15:33.892 { 00:15:33.892 "name": "BaseBdev3", 00:15:33.892 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:33.892 "is_configured": true, 00:15:33.892 "data_offset": 2048, 00:15:33.892 "data_size": 63488 00:15:33.892 }, 00:15:33.892 { 00:15:33.892 "name": "BaseBdev4", 00:15:33.892 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:33.892 "is_configured": true, 00:15:33.892 "data_offset": 2048, 00:15:33.892 "data_size": 63488 00:15:33.892 } 00:15:33.892 ] 00:15:33.892 }' 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.892 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:34.152 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.152 [2024-12-06 16:31:15.763913] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.152 [2024-12-06 16:31:15.900280] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.152 "name": "raid_bdev1", 00:15:34.152 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:34.152 "strip_size_kb": 0, 00:15:34.152 "state": "online", 00:15:34.152 "raid_level": "raid1", 00:15:34.152 "superblock": true, 00:15:34.152 "num_base_bdevs": 4, 
00:15:34.152 "num_base_bdevs_discovered": 3, 00:15:34.152 "num_base_bdevs_operational": 3, 00:15:34.152 "process": { 00:15:34.152 "type": "rebuild", 00:15:34.152 "target": "spare", 00:15:34.152 "progress": { 00:15:34.152 "blocks": 24576, 00:15:34.152 "percent": 38 00:15:34.152 } 00:15:34.152 }, 00:15:34.152 "base_bdevs_list": [ 00:15:34.152 { 00:15:34.152 "name": "spare", 00:15:34.152 "uuid": "730e98ab-457c-52eb-b362-09e9fa9ad474", 00:15:34.152 "is_configured": true, 00:15:34.152 "data_offset": 2048, 00:15:34.152 "data_size": 63488 00:15:34.152 }, 00:15:34.152 { 00:15:34.152 "name": null, 00:15:34.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.152 "is_configured": false, 00:15:34.152 "data_offset": 0, 00:15:34.152 "data_size": 63488 00:15:34.152 }, 00:15:34.152 { 00:15:34.152 "name": "BaseBdev3", 00:15:34.152 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:34.152 "is_configured": true, 00:15:34.152 "data_offset": 2048, 00:15:34.152 "data_size": 63488 00:15:34.152 }, 00:15:34.152 { 00:15:34.152 "name": "BaseBdev4", 00:15:34.152 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:34.152 "is_configured": true, 00:15:34.152 "data_offset": 2048, 00:15:34.152 "data_size": 63488 00:15:34.152 } 00:15:34.152 ] 00:15:34.152 }' 00:15:34.152 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.412 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.412 16:31:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.412 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.412 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=386 00:15:34.412 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.412 16:31:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.412 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.412 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.412 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.412 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.413 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.413 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.413 16:31:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.413 16:31:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.413 16:31:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.413 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.413 "name": "raid_bdev1", 00:15:34.413 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:34.413 "strip_size_kb": 0, 00:15:34.413 "state": "online", 00:15:34.413 "raid_level": "raid1", 00:15:34.413 "superblock": true, 00:15:34.413 "num_base_bdevs": 4, 00:15:34.413 "num_base_bdevs_discovered": 3, 00:15:34.413 "num_base_bdevs_operational": 3, 00:15:34.413 "process": { 00:15:34.413 "type": "rebuild", 00:15:34.413 "target": "spare", 00:15:34.413 "progress": { 00:15:34.413 "blocks": 26624, 00:15:34.413 "percent": 41 00:15:34.413 } 00:15:34.413 }, 00:15:34.413 "base_bdevs_list": [ 00:15:34.413 { 00:15:34.413 "name": "spare", 00:15:34.413 "uuid": "730e98ab-457c-52eb-b362-09e9fa9ad474", 00:15:34.413 "is_configured": true, 00:15:34.413 "data_offset": 2048, 00:15:34.413 "data_size": 63488 00:15:34.413 }, 00:15:34.413 { 
00:15:34.413 "name": null, 00:15:34.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.413 "is_configured": false, 00:15:34.413 "data_offset": 0, 00:15:34.413 "data_size": 63488 00:15:34.413 }, 00:15:34.413 { 00:15:34.413 "name": "BaseBdev3", 00:15:34.413 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:34.413 "is_configured": true, 00:15:34.413 "data_offset": 2048, 00:15:34.413 "data_size": 63488 00:15:34.413 }, 00:15:34.413 { 00:15:34.413 "name": "BaseBdev4", 00:15:34.413 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:34.413 "is_configured": true, 00:15:34.413 "data_offset": 2048, 00:15:34.413 "data_size": 63488 00:15:34.413 } 00:15:34.413 ] 00:15:34.413 }' 00:15:34.413 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.413 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.413 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.413 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.413 16:31:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.793 "name": "raid_bdev1", 00:15:35.793 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:35.793 "strip_size_kb": 0, 00:15:35.793 "state": "online", 00:15:35.793 "raid_level": "raid1", 00:15:35.793 "superblock": true, 00:15:35.793 "num_base_bdevs": 4, 00:15:35.793 "num_base_bdevs_discovered": 3, 00:15:35.793 "num_base_bdevs_operational": 3, 00:15:35.793 "process": { 00:15:35.793 "type": "rebuild", 00:15:35.793 "target": "spare", 00:15:35.793 "progress": { 00:15:35.793 "blocks": 51200, 00:15:35.793 "percent": 80 00:15:35.793 } 00:15:35.793 }, 00:15:35.793 "base_bdevs_list": [ 00:15:35.793 { 00:15:35.793 "name": "spare", 00:15:35.793 "uuid": "730e98ab-457c-52eb-b362-09e9fa9ad474", 00:15:35.793 "is_configured": true, 00:15:35.793 "data_offset": 2048, 00:15:35.793 "data_size": 63488 00:15:35.793 }, 00:15:35.793 { 00:15:35.793 "name": null, 00:15:35.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.793 "is_configured": false, 00:15:35.793 "data_offset": 0, 00:15:35.793 "data_size": 63488 00:15:35.793 }, 00:15:35.793 { 00:15:35.793 "name": "BaseBdev3", 00:15:35.793 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:35.793 "is_configured": true, 00:15:35.793 "data_offset": 2048, 00:15:35.793 "data_size": 63488 00:15:35.793 }, 00:15:35.793 { 00:15:35.793 "name": "BaseBdev4", 00:15:35.793 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:35.793 "is_configured": true, 00:15:35.793 "data_offset": 
2048, 00:15:35.793 "data_size": 63488 00:15:35.793 } 00:15:35.793 ] 00:15:35.793 }' 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.793 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.794 16:31:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:36.053 [2024-12-06 16:31:17.809489] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:36.053 [2024-12-06 16:31:17.809682] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:36.053 [2024-12-06 16:31:17.809811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.624 "name": "raid_bdev1", 00:15:36.624 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:36.624 "strip_size_kb": 0, 00:15:36.624 "state": "online", 00:15:36.624 "raid_level": "raid1", 00:15:36.624 "superblock": true, 00:15:36.624 "num_base_bdevs": 4, 00:15:36.624 "num_base_bdevs_discovered": 3, 00:15:36.624 "num_base_bdevs_operational": 3, 00:15:36.624 "base_bdevs_list": [ 00:15:36.624 { 00:15:36.624 "name": "spare", 00:15:36.624 "uuid": "730e98ab-457c-52eb-b362-09e9fa9ad474", 00:15:36.624 "is_configured": true, 00:15:36.624 "data_offset": 2048, 00:15:36.624 "data_size": 63488 00:15:36.624 }, 00:15:36.624 { 00:15:36.624 "name": null, 00:15:36.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.624 "is_configured": false, 00:15:36.624 "data_offset": 0, 00:15:36.624 "data_size": 63488 00:15:36.624 }, 00:15:36.624 { 00:15:36.624 "name": "BaseBdev3", 00:15:36.624 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:36.624 "is_configured": true, 00:15:36.624 "data_offset": 2048, 00:15:36.624 "data_size": 63488 00:15:36.624 }, 00:15:36.624 { 00:15:36.624 "name": "BaseBdev4", 00:15:36.624 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:36.624 "is_configured": true, 00:15:36.624 "data_offset": 2048, 00:15:36.624 "data_size": 63488 00:15:36.624 } 00:15:36.624 ] 00:15:36.624 }' 00:15:36.624 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.885 "name": "raid_bdev1", 00:15:36.885 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:36.885 "strip_size_kb": 0, 00:15:36.885 "state": "online", 00:15:36.885 "raid_level": "raid1", 00:15:36.885 "superblock": true, 00:15:36.885 "num_base_bdevs": 4, 00:15:36.885 "num_base_bdevs_discovered": 3, 00:15:36.885 "num_base_bdevs_operational": 3, 00:15:36.885 "base_bdevs_list": [ 00:15:36.885 { 00:15:36.885 "name": "spare", 00:15:36.885 "uuid": "730e98ab-457c-52eb-b362-09e9fa9ad474", 00:15:36.885 "is_configured": true, 00:15:36.885 "data_offset": 2048, 
00:15:36.885 "data_size": 63488 00:15:36.885 }, 00:15:36.885 { 00:15:36.885 "name": null, 00:15:36.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.885 "is_configured": false, 00:15:36.885 "data_offset": 0, 00:15:36.885 "data_size": 63488 00:15:36.885 }, 00:15:36.885 { 00:15:36.885 "name": "BaseBdev3", 00:15:36.885 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:36.885 "is_configured": true, 00:15:36.885 "data_offset": 2048, 00:15:36.885 "data_size": 63488 00:15:36.885 }, 00:15:36.885 { 00:15:36.885 "name": "BaseBdev4", 00:15:36.885 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:36.885 "is_configured": true, 00:15:36.885 "data_offset": 2048, 00:15:36.885 "data_size": 63488 00:15:36.885 } 00:15:36.885 ] 00:15:36.885 }' 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.885 
16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.885 "name": "raid_bdev1", 00:15:36.885 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:36.885 "strip_size_kb": 0, 00:15:36.885 "state": "online", 00:15:36.885 "raid_level": "raid1", 00:15:36.885 "superblock": true, 00:15:36.885 "num_base_bdevs": 4, 00:15:36.885 "num_base_bdevs_discovered": 3, 00:15:36.885 "num_base_bdevs_operational": 3, 00:15:36.885 "base_bdevs_list": [ 00:15:36.885 { 00:15:36.885 "name": "spare", 00:15:36.885 "uuid": "730e98ab-457c-52eb-b362-09e9fa9ad474", 00:15:36.885 "is_configured": true, 00:15:36.885 "data_offset": 2048, 00:15:36.885 "data_size": 63488 00:15:36.885 }, 00:15:36.885 { 00:15:36.885 "name": null, 00:15:36.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.885 "is_configured": false, 00:15:36.885 "data_offset": 0, 00:15:36.885 "data_size": 63488 00:15:36.885 }, 00:15:36.885 { 00:15:36.885 "name": "BaseBdev3", 00:15:36.885 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:36.885 "is_configured": true, 00:15:36.885 "data_offset": 2048, 00:15:36.885 "data_size": 63488 
00:15:36.885 }, 00:15:36.885 { 00:15:36.885 "name": "BaseBdev4", 00:15:36.885 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:36.885 "is_configured": true, 00:15:36.885 "data_offset": 2048, 00:15:36.885 "data_size": 63488 00:15:36.885 } 00:15:36.885 ] 00:15:36.885 }' 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.885 16:31:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.487 [2024-12-06 16:31:19.124106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.487 [2024-12-06 16:31:19.124161] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.487 [2024-12-06 16:31:19.124284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.487 [2024-12-06 16:31:19.124376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.487 [2024-12-06 16:31:19.124391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.487 
16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.487 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:37.753 /dev/nbd0 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.753 1+0 records in 00:15:37.753 1+0 records out 00:15:37.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391404 s, 10.5 MB/s 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.753 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:38.014 /dev/nbd1 00:15:38.014 16:31:19 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.014 1+0 records in 00:15:38.014 1+0 records out 00:15:38.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452213 s, 9.1 MB/s 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:38.014 16:31:19 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:38.014 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:38.274 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:38.274 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.274 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:38.274 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.274 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:38.274 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.274 16:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:38.533 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.533 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.533 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.533 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.533 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.533 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.534 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:38.534 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.534 16:31:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.534 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.793 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.793 [2024-12-06 16:31:20.446397] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:15:38.793 [2024-12-06 16:31:20.446503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.793 [2024-12-06 16:31:20.446540] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:38.793 [2024-12-06 16:31:20.446559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.793 [2024-12-06 16:31:20.449262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.793 [2024-12-06 16:31:20.449325] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:38.794 [2024-12-06 16:31:20.449428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:38.794 [2024-12-06 16:31:20.449468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:38.794 [2024-12-06 16:31:20.449601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.794 [2024-12-06 16:31:20.449757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:38.794 spare 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.794 [2024-12-06 16:31:20.549669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:38.794 [2024-12-06 16:31:20.549869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:38.794 [2024-12-06 16:31:20.550354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:15:38.794 [2024-12-06 16:31:20.550602] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:38.794 [2024-12-06 16:31:20.550616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:38.794 [2024-12-06 16:31:20.550826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.794 "name": "raid_bdev1", 00:15:38.794 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:38.794 "strip_size_kb": 0, 00:15:38.794 "state": "online", 00:15:38.794 "raid_level": "raid1", 00:15:38.794 "superblock": true, 00:15:38.794 "num_base_bdevs": 4, 00:15:38.794 "num_base_bdevs_discovered": 3, 00:15:38.794 "num_base_bdevs_operational": 3, 00:15:38.794 "base_bdevs_list": [ 00:15:38.794 { 00:15:38.794 "name": "spare", 00:15:38.794 "uuid": "730e98ab-457c-52eb-b362-09e9fa9ad474", 00:15:38.794 "is_configured": true, 00:15:38.794 "data_offset": 2048, 00:15:38.794 "data_size": 63488 00:15:38.794 }, 00:15:38.794 { 00:15:38.794 "name": null, 00:15:38.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.794 "is_configured": false, 00:15:38.794 "data_offset": 2048, 00:15:38.794 "data_size": 63488 00:15:38.794 }, 00:15:38.794 { 00:15:38.794 "name": "BaseBdev3", 00:15:38.794 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:38.794 "is_configured": true, 00:15:38.794 "data_offset": 2048, 00:15:38.794 "data_size": 63488 00:15:38.794 }, 00:15:38.794 { 00:15:38.794 "name": "BaseBdev4", 00:15:38.794 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:38.794 "is_configured": true, 00:15:38.794 "data_offset": 2048, 00:15:38.794 "data_size": 63488 00:15:38.794 } 00:15:38.794 ] 00:15:38.794 }' 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.794 16:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.363 16:31:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.363 "name": "raid_bdev1", 00:15:39.363 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:39.363 "strip_size_kb": 0, 00:15:39.363 "state": "online", 00:15:39.363 "raid_level": "raid1", 00:15:39.363 "superblock": true, 00:15:39.363 "num_base_bdevs": 4, 00:15:39.363 "num_base_bdevs_discovered": 3, 00:15:39.363 "num_base_bdevs_operational": 3, 00:15:39.363 "base_bdevs_list": [ 00:15:39.363 { 00:15:39.363 "name": "spare", 00:15:39.363 "uuid": "730e98ab-457c-52eb-b362-09e9fa9ad474", 00:15:39.363 "is_configured": true, 00:15:39.363 "data_offset": 2048, 00:15:39.363 "data_size": 63488 00:15:39.363 }, 00:15:39.363 { 00:15:39.363 "name": null, 00:15:39.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.363 "is_configured": false, 00:15:39.363 "data_offset": 2048, 00:15:39.363 "data_size": 63488 00:15:39.363 }, 00:15:39.363 { 00:15:39.363 "name": "BaseBdev3", 00:15:39.363 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:39.363 "is_configured": true, 00:15:39.363 "data_offset": 2048, 00:15:39.363 "data_size": 63488 00:15:39.363 
}, 00:15:39.363 { 00:15:39.363 "name": "BaseBdev4", 00:15:39.363 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:39.363 "is_configured": true, 00:15:39.363 "data_offset": 2048, 00:15:39.363 "data_size": 63488 00:15:39.363 } 00:15:39.363 ] 00:15:39.363 }' 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:39.363 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.623 [2024-12-06 16:31:21.233729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.623 "name": "raid_bdev1", 00:15:39.623 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:39.623 "strip_size_kb": 0, 00:15:39.623 "state": "online", 00:15:39.623 "raid_level": "raid1", 00:15:39.623 "superblock": true, 00:15:39.623 "num_base_bdevs": 4, 00:15:39.623 "num_base_bdevs_discovered": 2, 00:15:39.623 "num_base_bdevs_operational": 
2, 00:15:39.623 "base_bdevs_list": [ 00:15:39.623 { 00:15:39.623 "name": null, 00:15:39.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.623 "is_configured": false, 00:15:39.623 "data_offset": 0, 00:15:39.623 "data_size": 63488 00:15:39.623 }, 00:15:39.623 { 00:15:39.623 "name": null, 00:15:39.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.623 "is_configured": false, 00:15:39.623 "data_offset": 2048, 00:15:39.623 "data_size": 63488 00:15:39.623 }, 00:15:39.623 { 00:15:39.623 "name": "BaseBdev3", 00:15:39.623 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:39.623 "is_configured": true, 00:15:39.623 "data_offset": 2048, 00:15:39.623 "data_size": 63488 00:15:39.623 }, 00:15:39.623 { 00:15:39.623 "name": "BaseBdev4", 00:15:39.623 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:39.623 "is_configured": true, 00:15:39.623 "data_offset": 2048, 00:15:39.623 "data_size": 63488 00:15:39.623 } 00:15:39.623 ] 00:15:39.623 }' 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.623 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.191 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.191 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.191 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.191 [2024-12-06 16:31:21.748958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.191 [2024-12-06 16:31:21.749343] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:40.191 [2024-12-06 16:31:21.749441] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:40.191 [2024-12-06 16:31:21.749532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.191 [2024-12-06 16:31:21.753837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:15:40.191 16:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.191 16:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:40.191 [2024-12-06 16:31:21.756280] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.125 "name": "raid_bdev1", 00:15:41.125 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:41.125 "strip_size_kb": 0, 00:15:41.125 "state": "online", 00:15:41.125 "raid_level": "raid1", 
00:15:41.125 "superblock": true, 00:15:41.125 "num_base_bdevs": 4, 00:15:41.125 "num_base_bdevs_discovered": 3, 00:15:41.125 "num_base_bdevs_operational": 3, 00:15:41.125 "process": { 00:15:41.125 "type": "rebuild", 00:15:41.125 "target": "spare", 00:15:41.125 "progress": { 00:15:41.125 "blocks": 20480, 00:15:41.125 "percent": 32 00:15:41.125 } 00:15:41.125 }, 00:15:41.125 "base_bdevs_list": [ 00:15:41.125 { 00:15:41.125 "name": "spare", 00:15:41.125 "uuid": "730e98ab-457c-52eb-b362-09e9fa9ad474", 00:15:41.125 "is_configured": true, 00:15:41.125 "data_offset": 2048, 00:15:41.125 "data_size": 63488 00:15:41.125 }, 00:15:41.125 { 00:15:41.125 "name": null, 00:15:41.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.125 "is_configured": false, 00:15:41.125 "data_offset": 2048, 00:15:41.125 "data_size": 63488 00:15:41.125 }, 00:15:41.125 { 00:15:41.125 "name": "BaseBdev3", 00:15:41.125 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:41.125 "is_configured": true, 00:15:41.125 "data_offset": 2048, 00:15:41.125 "data_size": 63488 00:15:41.125 }, 00:15:41.125 { 00:15:41.125 "name": "BaseBdev4", 00:15:41.125 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:41.125 "is_configured": true, 00:15:41.125 "data_offset": 2048, 00:15:41.125 "data_size": 63488 00:15:41.125 } 00:15:41.125 ] 00:15:41.125 }' 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:41.125 16:31:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.125 [2024-12-06 16:31:22.904492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.384 [2024-12-06 16:31:22.961945] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:41.384 [2024-12-06 16:31:22.962152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.384 [2024-12-06 16:31:22.962175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.384 [2024-12-06 16:31:22.962187] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.384 16:31:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.384 16:31:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.384 "name": "raid_bdev1", 00:15:41.384 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:41.384 "strip_size_kb": 0, 00:15:41.384 "state": "online", 00:15:41.384 "raid_level": "raid1", 00:15:41.384 "superblock": true, 00:15:41.384 "num_base_bdevs": 4, 00:15:41.384 "num_base_bdevs_discovered": 2, 00:15:41.384 "num_base_bdevs_operational": 2, 00:15:41.384 "base_bdevs_list": [ 00:15:41.384 { 00:15:41.384 "name": null, 00:15:41.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.384 "is_configured": false, 00:15:41.384 "data_offset": 0, 00:15:41.384 "data_size": 63488 00:15:41.384 }, 00:15:41.384 { 00:15:41.384 "name": null, 00:15:41.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.384 "is_configured": false, 00:15:41.384 "data_offset": 2048, 00:15:41.384 "data_size": 63488 00:15:41.384 }, 00:15:41.384 { 00:15:41.384 "name": "BaseBdev3", 00:15:41.384 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:41.384 "is_configured": true, 00:15:41.384 "data_offset": 2048, 00:15:41.384 "data_size": 63488 00:15:41.384 }, 00:15:41.384 { 00:15:41.384 "name": "BaseBdev4", 00:15:41.384 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:41.384 "is_configured": true, 00:15:41.384 "data_offset": 2048, 00:15:41.384 "data_size": 63488 00:15:41.384 } 00:15:41.384 ] 00:15:41.384 }' 00:15:41.384 16:31:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:41.384 16:31:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.643 16:31:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:41.643 16:31:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.643 16:31:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.643 [2024-12-06 16:31:23.466073] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:41.643 [2024-12-06 16:31:23.466286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.643 [2024-12-06 16:31:23.466325] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:41.643 [2024-12-06 16:31:23.466340] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.643 [2024-12-06 16:31:23.466879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.643 [2024-12-06 16:31:23.466912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:41.643 [2024-12-06 16:31:23.467055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:41.643 [2024-12-06 16:31:23.467077] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:41.643 [2024-12-06 16:31:23.467101] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:41.643 [2024-12-06 16:31:23.467136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.643 [2024-12-06 16:31:23.471442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:41.643 spare 00:15:41.643 16:31:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.643 16:31:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:41.643 [2024-12-06 16:31:23.473768] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.017 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.017 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.017 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.017 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.017 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.017 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.017 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.017 16:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.017 16:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.017 16:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.017 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.017 "name": "raid_bdev1", 00:15:43.017 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:43.017 "strip_size_kb": 0, 00:15:43.017 "state": "online", 00:15:43.017 
"raid_level": "raid1", 00:15:43.017 "superblock": true, 00:15:43.017 "num_base_bdevs": 4, 00:15:43.017 "num_base_bdevs_discovered": 3, 00:15:43.017 "num_base_bdevs_operational": 3, 00:15:43.017 "process": { 00:15:43.017 "type": "rebuild", 00:15:43.017 "target": "spare", 00:15:43.017 "progress": { 00:15:43.017 "blocks": 20480, 00:15:43.017 "percent": 32 00:15:43.017 } 00:15:43.017 }, 00:15:43.017 "base_bdevs_list": [ 00:15:43.017 { 00:15:43.017 "name": "spare", 00:15:43.017 "uuid": "730e98ab-457c-52eb-b362-09e9fa9ad474", 00:15:43.017 "is_configured": true, 00:15:43.017 "data_offset": 2048, 00:15:43.017 "data_size": 63488 00:15:43.017 }, 00:15:43.017 { 00:15:43.017 "name": null, 00:15:43.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.017 "is_configured": false, 00:15:43.017 "data_offset": 2048, 00:15:43.017 "data_size": 63488 00:15:43.017 }, 00:15:43.017 { 00:15:43.017 "name": "BaseBdev3", 00:15:43.018 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:43.018 "is_configured": true, 00:15:43.018 "data_offset": 2048, 00:15:43.018 "data_size": 63488 00:15:43.018 }, 00:15:43.018 { 00:15:43.018 "name": "BaseBdev4", 00:15:43.018 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:43.018 "is_configured": true, 00:15:43.018 "data_offset": 2048, 00:15:43.018 "data_size": 63488 00:15:43.018 } 00:15:43.018 ] 00:15:43.018 }' 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.018 [2024-12-06 16:31:24.669831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.018 [2024-12-06 16:31:24.679406] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:43.018 [2024-12-06 16:31:24.679486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.018 [2024-12-06 16:31:24.679508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.018 [2024-12-06 16:31:24.679517] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.018 
16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.018 "name": "raid_bdev1", 00:15:43.018 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:43.018 "strip_size_kb": 0, 00:15:43.018 "state": "online", 00:15:43.018 "raid_level": "raid1", 00:15:43.018 "superblock": true, 00:15:43.018 "num_base_bdevs": 4, 00:15:43.018 "num_base_bdevs_discovered": 2, 00:15:43.018 "num_base_bdevs_operational": 2, 00:15:43.018 "base_bdevs_list": [ 00:15:43.018 { 00:15:43.018 "name": null, 00:15:43.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.018 "is_configured": false, 00:15:43.018 "data_offset": 0, 00:15:43.018 "data_size": 63488 00:15:43.018 }, 00:15:43.018 { 00:15:43.018 "name": null, 00:15:43.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.018 "is_configured": false, 00:15:43.018 "data_offset": 2048, 00:15:43.018 "data_size": 63488 00:15:43.018 }, 00:15:43.018 { 00:15:43.018 "name": "BaseBdev3", 00:15:43.018 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:43.018 "is_configured": true, 00:15:43.018 "data_offset": 2048, 00:15:43.018 "data_size": 63488 00:15:43.018 }, 00:15:43.018 { 00:15:43.018 "name": "BaseBdev4", 00:15:43.018 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:43.018 "is_configured": true, 00:15:43.018 "data_offset": 2048, 00:15:43.018 "data_size": 63488 00:15:43.018 } 00:15:43.018 ] 00:15:43.018 }' 00:15:43.018 16:31:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.018 16:31:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.586 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.586 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.586 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.586 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.586 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.586 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.586 16:31:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.586 16:31:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.586 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.586 16:31:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.586 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.586 "name": "raid_bdev1", 00:15:43.586 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:43.586 "strip_size_kb": 0, 00:15:43.586 "state": "online", 00:15:43.586 "raid_level": "raid1", 00:15:43.586 "superblock": true, 00:15:43.586 "num_base_bdevs": 4, 00:15:43.586 "num_base_bdevs_discovered": 2, 00:15:43.586 "num_base_bdevs_operational": 2, 00:15:43.586 "base_bdevs_list": [ 00:15:43.586 { 00:15:43.586 "name": null, 00:15:43.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.586 "is_configured": false, 00:15:43.586 "data_offset": 0, 00:15:43.586 "data_size": 63488 00:15:43.586 }, 00:15:43.586 
{ 00:15:43.586 "name": null, 00:15:43.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.586 "is_configured": false, 00:15:43.586 "data_offset": 2048, 00:15:43.586 "data_size": 63488 00:15:43.586 }, 00:15:43.586 { 00:15:43.586 "name": "BaseBdev3", 00:15:43.587 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:43.587 "is_configured": true, 00:15:43.587 "data_offset": 2048, 00:15:43.587 "data_size": 63488 00:15:43.587 }, 00:15:43.587 { 00:15:43.587 "name": "BaseBdev4", 00:15:43.587 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:43.587 "is_configured": true, 00:15:43.587 "data_offset": 2048, 00:15:43.587 "data_size": 63488 00:15:43.587 } 00:15:43.587 ] 00:15:43.587 }' 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.587 [2024-12-06 16:31:25.307553] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:43.587 [2024-12-06 16:31:25.307746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.587 [2024-12-06 16:31:25.307783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:43.587 [2024-12-06 16:31:25.307794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.587 [2024-12-06 16:31:25.308347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.587 [2024-12-06 16:31:25.308371] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:43.587 [2024-12-06 16:31:25.308464] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:43.587 [2024-12-06 16:31:25.308537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:43.587 [2024-12-06 16:31:25.308553] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:43.587 [2024-12-06 16:31:25.308566] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:43.587 BaseBdev1 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.587 16:31:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.523 16:31:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.523 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.782 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.782 "name": "raid_bdev1", 00:15:44.782 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:44.782 "strip_size_kb": 0, 00:15:44.782 "state": "online", 00:15:44.782 "raid_level": "raid1", 00:15:44.782 "superblock": true, 00:15:44.782 "num_base_bdevs": 4, 00:15:44.782 "num_base_bdevs_discovered": 2, 00:15:44.782 "num_base_bdevs_operational": 2, 00:15:44.782 "base_bdevs_list": [ 00:15:44.782 { 00:15:44.782 "name": null, 00:15:44.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.782 "is_configured": false, 00:15:44.782 "data_offset": 0, 00:15:44.782 "data_size": 63488 00:15:44.782 }, 00:15:44.782 { 00:15:44.782 "name": null, 00:15:44.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.782 
"is_configured": false, 00:15:44.782 "data_offset": 2048, 00:15:44.782 "data_size": 63488 00:15:44.782 }, 00:15:44.782 { 00:15:44.782 "name": "BaseBdev3", 00:15:44.782 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:44.782 "is_configured": true, 00:15:44.782 "data_offset": 2048, 00:15:44.782 "data_size": 63488 00:15:44.782 }, 00:15:44.782 { 00:15:44.782 "name": "BaseBdev4", 00:15:44.782 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:44.782 "is_configured": true, 00:15:44.782 "data_offset": 2048, 00:15:44.782 "data_size": 63488 00:15:44.782 } 00:15:44.782 ] 00:15:44.782 }' 00:15:44.782 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.782 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.041 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.041 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.041 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.041 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.041 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.041 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.041 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.041 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.041 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.041 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.041 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:45.041 "name": "raid_bdev1", 00:15:45.041 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:45.041 "strip_size_kb": 0, 00:15:45.041 "state": "online", 00:15:45.041 "raid_level": "raid1", 00:15:45.041 "superblock": true, 00:15:45.041 "num_base_bdevs": 4, 00:15:45.041 "num_base_bdevs_discovered": 2, 00:15:45.041 "num_base_bdevs_operational": 2, 00:15:45.041 "base_bdevs_list": [ 00:15:45.041 { 00:15:45.041 "name": null, 00:15:45.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.041 "is_configured": false, 00:15:45.041 "data_offset": 0, 00:15:45.041 "data_size": 63488 00:15:45.041 }, 00:15:45.041 { 00:15:45.041 "name": null, 00:15:45.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.041 "is_configured": false, 00:15:45.041 "data_offset": 2048, 00:15:45.041 "data_size": 63488 00:15:45.041 }, 00:15:45.041 { 00:15:45.041 "name": "BaseBdev3", 00:15:45.041 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:45.041 "is_configured": true, 00:15:45.041 "data_offset": 2048, 00:15:45.041 "data_size": 63488 00:15:45.041 }, 00:15:45.041 { 00:15:45.041 "name": "BaseBdev4", 00:15:45.041 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:45.041 "is_configured": true, 00:15:45.041 "data_offset": 2048, 00:15:45.041 "data_size": 63488 00:15:45.041 } 00:15:45.041 ] 00:15:45.041 }' 00:15:45.041 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.316 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.316 [2024-12-06 16:31:26.948911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.317 [2024-12-06 16:31:26.949136] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:45.317 [2024-12-06 16:31:26.949156] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:45.317 request: 00:15:45.317 { 00:15:45.317 "base_bdev": "BaseBdev1", 00:15:45.317 "raid_bdev": "raid_bdev1", 00:15:45.317 "method": "bdev_raid_add_base_bdev", 00:15:45.317 "req_id": 1 00:15:45.317 } 00:15:45.317 Got JSON-RPC error response 00:15:45.317 response: 00:15:45.317 { 00:15:45.317 "code": -22, 00:15:45.317 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:45.317 } 00:15:45.317 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:45.317 16:31:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:15:45.317 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:45.317 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:45.317 16:31:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:45.317 16:31:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:46.254 16:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.254 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.254 "name": "raid_bdev1", 00:15:46.254 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:46.254 "strip_size_kb": 0, 00:15:46.254 "state": "online", 00:15:46.254 "raid_level": "raid1", 00:15:46.254 "superblock": true, 00:15:46.254 "num_base_bdevs": 4, 00:15:46.254 "num_base_bdevs_discovered": 2, 00:15:46.254 "num_base_bdevs_operational": 2, 00:15:46.254 "base_bdevs_list": [ 00:15:46.254 { 00:15:46.254 "name": null, 00:15:46.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.254 "is_configured": false, 00:15:46.254 "data_offset": 0, 00:15:46.254 "data_size": 63488 00:15:46.254 }, 00:15:46.254 { 00:15:46.254 "name": null, 00:15:46.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.254 "is_configured": false, 00:15:46.254 "data_offset": 2048, 00:15:46.254 "data_size": 63488 00:15:46.254 }, 00:15:46.254 { 00:15:46.254 "name": "BaseBdev3", 00:15:46.254 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:46.254 "is_configured": true, 00:15:46.254 "data_offset": 2048, 00:15:46.254 "data_size": 63488 00:15:46.254 }, 00:15:46.254 { 00:15:46.254 "name": "BaseBdev4", 00:15:46.254 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:46.254 "is_configured": true, 00:15:46.254 "data_offset": 2048, 00:15:46.254 "data_size": 63488 00:15:46.254 } 00:15:46.254 ] 00:15:46.254 }' 00:15:46.254 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.254 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.821 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.821 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.821 16:31:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.821 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.821 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.821 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.821 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.821 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.821 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.821 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.821 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.821 "name": "raid_bdev1", 00:15:46.821 "uuid": "c51a9c31-77b3-474c-acab-fe02d2778e04", 00:15:46.821 "strip_size_kb": 0, 00:15:46.821 "state": "online", 00:15:46.821 "raid_level": "raid1", 00:15:46.821 "superblock": true, 00:15:46.821 "num_base_bdevs": 4, 00:15:46.821 "num_base_bdevs_discovered": 2, 00:15:46.821 "num_base_bdevs_operational": 2, 00:15:46.821 "base_bdevs_list": [ 00:15:46.821 { 00:15:46.822 "name": null, 00:15:46.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.822 "is_configured": false, 00:15:46.822 "data_offset": 0, 00:15:46.822 "data_size": 63488 00:15:46.822 }, 00:15:46.822 { 00:15:46.822 "name": null, 00:15:46.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.822 "is_configured": false, 00:15:46.822 "data_offset": 2048, 00:15:46.822 "data_size": 63488 00:15:46.822 }, 00:15:46.822 { 00:15:46.822 "name": "BaseBdev3", 00:15:46.822 "uuid": "cb1c2bf9-78dc-5d47-a9f2-d61ba22efaaf", 00:15:46.822 "is_configured": true, 00:15:46.822 "data_offset": 2048, 00:15:46.822 "data_size": 63488 00:15:46.822 }, 
00:15:46.822 { 00:15:46.822 "name": "BaseBdev4", 00:15:46.822 "uuid": "72e28289-0fac-5d98-84a4-dac5bb2f3a21", 00:15:46.822 "is_configured": true, 00:15:46.822 "data_offset": 2048, 00:15:46.822 "data_size": 63488 00:15:46.822 } 00:15:46.822 ] 00:15:46.822 }' 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 89071 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 89071 ']' 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 89071 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89071 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.822 killing process with pid 89071 00:15:46.822 Received shutdown signal, test time was about 60.000000 seconds 00:15:46.822 00:15:46.822 Latency(us) 00:15:46.822 [2024-12-06T16:31:28.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.822 [2024-12-06T16:31:28.661Z] =================================================================================================================== 00:15:46.822 [2024-12-06T16:31:28.661Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89071' 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 89071 00:15:46.822 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 89071 00:15:46.822 [2024-12-06 16:31:28.615777] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.822 [2024-12-06 16:31:28.615941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.822 [2024-12-06 16:31:28.616018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.822 [2024-12-06 16:31:28.616032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:47.081 [2024-12-06 16:31:28.671304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.081 16:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:47.081 ************************************ 00:15:47.081 END TEST raid_rebuild_test_sb 00:15:47.081 ************************************ 00:15:47.081 00:15:47.081 real 0m26.153s 00:15:47.081 user 0m31.576s 00:15:47.081 sys 0m4.754s 00:15:47.081 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.081 16:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.341 16:31:28 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:47.341 16:31:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:47.341 16:31:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.341 16:31:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:15:47.341 ************************************ 00:15:47.341 START TEST raid_rebuild_test_io 00:15:47.341 ************************************ 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89834 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89834 00:15:47.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 89834 ']' 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.341 16:31:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.341 [2024-12-06 16:31:29.089878] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:15:47.341 [2024-12-06 16:31:29.090228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89834 ] 00:15:47.341 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:47.341 Zero copy mechanism will not be used. 
00:15:47.601 [2024-12-06 16:31:29.292956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.601 [2024-12-06 16:31:29.323642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.601 [2024-12-06 16:31:29.369376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.601 [2024-12-06 16:31:29.369514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.171 16:31:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.171 16:31:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:15:48.171 16:31:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.171 16:31:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:48.171 16:31:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.171 16:31:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.171 BaseBdev1_malloc 00:15:48.171 16:31:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.171 16:31:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:48.171 16:31:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.171 16:31:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.171 [2024-12-06 16:31:30.005207] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:48.171 [2024-12-06 16:31:30.005287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.171 [2024-12-06 16:31:30.005311] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:48.171 [2024-12-06 
16:31:30.005324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.171 [2024-12-06 16:31:30.007533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.430 [2024-12-06 16:31:30.007672] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:48.430 BaseBdev1 00:15:48.430 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.430 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.430 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:48.430 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.430 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.430 BaseBdev2_malloc 00:15:48.430 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.430 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:48.430 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.430 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.430 [2024-12-06 16:31:30.025697] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:48.430 [2024-12-06 16:31:30.025751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.430 [2024-12-06 16:31:30.025770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:48.430 [2024-12-06 16:31:30.025779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.430 [2024-12-06 16:31:30.027832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:48.430 [2024-12-06 16:31:30.027870] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:48.430 BaseBdev2 00:15:48.430 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.431 BaseBdev3_malloc 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.431 [2024-12-06 16:31:30.045999] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:48.431 [2024-12-06 16:31:30.046170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.431 [2024-12-06 16:31:30.046194] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:48.431 [2024-12-06 16:31:30.046216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.431 [2024-12-06 16:31:30.048282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.431 [2024-12-06 16:31:30.048317] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:48.431 BaseBdev3 00:15:48.431 16:31:30 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.431 BaseBdev4_malloc 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.431 [2024-12-06 16:31:30.080617] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:48.431 [2024-12-06 16:31:30.080670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.431 [2024-12-06 16:31:30.080694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:48.431 [2024-12-06 16:31:30.080703] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.431 [2024-12-06 16:31:30.082761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.431 [2024-12-06 16:31:30.082798] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:48.431 BaseBdev4 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.431 spare_malloc 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.431 spare_delay 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.431 [2024-12-06 16:31:30.120864] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:48.431 [2024-12-06 16:31:30.120912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.431 [2024-12-06 16:31:30.120930] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:48.431 [2024-12-06 16:31:30.120939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.431 [2024-12-06 16:31:30.122928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.431 [2024-12-06 16:31:30.122967] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:48.431 spare 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.431 [2024-12-06 16:31:30.132913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.431 [2024-12-06 16:31:30.134718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.431 [2024-12-06 16:31:30.134783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.431 [2024-12-06 16:31:30.134820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:48.431 [2024-12-06 16:31:30.134892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:48.431 [2024-12-06 16:31:30.134901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:48.431 [2024-12-06 16:31:30.135135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:48.431 [2024-12-06 16:31:30.135287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:48.431 [2024-12-06 16:31:30.135305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:48.431 [2024-12-06 16:31:30.135411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:48.431 16:31:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.431 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.431 "name": "raid_bdev1", 00:15:48.431 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:48.431 "strip_size_kb": 0, 00:15:48.431 "state": "online", 00:15:48.431 "raid_level": "raid1", 00:15:48.431 "superblock": false, 00:15:48.431 "num_base_bdevs": 4, 00:15:48.431 "num_base_bdevs_discovered": 4, 00:15:48.431 "num_base_bdevs_operational": 4, 00:15:48.431 "base_bdevs_list": [ 00:15:48.431 
{ 00:15:48.431 "name": "BaseBdev1", 00:15:48.431 "uuid": "5683eb6b-21c8-5a87-b611-48d7da482dd5", 00:15:48.431 "is_configured": true, 00:15:48.431 "data_offset": 0, 00:15:48.431 "data_size": 65536 00:15:48.431 }, 00:15:48.431 { 00:15:48.432 "name": "BaseBdev2", 00:15:48.432 "uuid": "e668edda-f139-5351-9b8b-8dea83c6dc3a", 00:15:48.432 "is_configured": true, 00:15:48.432 "data_offset": 0, 00:15:48.432 "data_size": 65536 00:15:48.432 }, 00:15:48.432 { 00:15:48.432 "name": "BaseBdev3", 00:15:48.432 "uuid": "ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:48.432 "is_configured": true, 00:15:48.432 "data_offset": 0, 00:15:48.432 "data_size": 65536 00:15:48.432 }, 00:15:48.432 { 00:15:48.432 "name": "BaseBdev4", 00:15:48.432 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:48.432 "is_configured": true, 00:15:48.432 "data_offset": 0, 00:15:48.432 "data_size": 65536 00:15:48.432 } 00:15:48.432 ] 00:15:48.432 }' 00:15:48.432 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.432 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.055 [2024-12-06 16:31:30.592470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.055 
16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:49.055 [2024-12-06 16:31:30.687969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.055 "name": "raid_bdev1", 00:15:49.055 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:49.055 "strip_size_kb": 0, 00:15:49.055 "state": "online", 00:15:49.055 "raid_level": "raid1", 00:15:49.055 "superblock": false, 00:15:49.055 "num_base_bdevs": 4, 00:15:49.055 "num_base_bdevs_discovered": 3, 00:15:49.055 "num_base_bdevs_operational": 3, 00:15:49.055 "base_bdevs_list": [ 00:15:49.055 { 00:15:49.055 "name": null, 00:15:49.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.055 "is_configured": false, 00:15:49.055 "data_offset": 0, 00:15:49.055 "data_size": 65536 00:15:49.055 }, 00:15:49.055 { 00:15:49.055 "name": "BaseBdev2", 00:15:49.055 "uuid": "e668edda-f139-5351-9b8b-8dea83c6dc3a", 00:15:49.055 "is_configured": true, 00:15:49.055 "data_offset": 0, 00:15:49.055 "data_size": 65536 00:15:49.055 }, 00:15:49.055 { 00:15:49.055 "name": "BaseBdev3", 00:15:49.055 "uuid": 
"ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:49.055 "is_configured": true, 00:15:49.055 "data_offset": 0, 00:15:49.055 "data_size": 65536 00:15:49.055 }, 00:15:49.055 { 00:15:49.055 "name": "BaseBdev4", 00:15:49.055 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:49.055 "is_configured": true, 00:15:49.055 "data_offset": 0, 00:15:49.055 "data_size": 65536 00:15:49.055 } 00:15:49.055 ] 00:15:49.055 }' 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.055 16:31:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.055 [2024-12-06 16:31:30.797855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:49.055 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:49.055 Zero copy mechanism will not be used. 00:15:49.055 Running I/O for 60 seconds... 00:15:49.320 16:31:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:49.320 16:31:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.320 16:31:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.320 [2024-12-06 16:31:31.143295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.579 16:31:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.579 16:31:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:49.579 [2024-12-06 16:31:31.199490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:49.579 [2024-12-06 16:31:31.201547] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:49.579 [2024-12-06 16:31:31.323342] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:49.579 
[2024-12-06 16:31:31.324651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:49.839 [2024-12-06 16:31:31.542596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:49.839 [2024-12-06 16:31:31.543016] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:50.099 [2024-12-06 16:31:31.792607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:50.099 142.00 IOPS, 426.00 MiB/s [2024-12-06T16:31:31.938Z] [2024-12-06 16:31:31.926176] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:50.359 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.359 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.359 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.359 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.359 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.359 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.359 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.359 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.359 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.619 16:31:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.619 "name": "raid_bdev1", 00:15:50.619 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:50.619 "strip_size_kb": 0, 00:15:50.619 "state": "online", 00:15:50.619 "raid_level": "raid1", 00:15:50.619 "superblock": false, 00:15:50.619 "num_base_bdevs": 4, 00:15:50.619 "num_base_bdevs_discovered": 4, 00:15:50.619 "num_base_bdevs_operational": 4, 00:15:50.619 "process": { 00:15:50.619 "type": "rebuild", 00:15:50.619 "target": "spare", 00:15:50.619 "progress": { 00:15:50.619 "blocks": 14336, 00:15:50.619 "percent": 21 00:15:50.619 } 00:15:50.619 }, 00:15:50.619 "base_bdevs_list": [ 00:15:50.619 { 00:15:50.619 "name": "spare", 00:15:50.619 "uuid": "8d089aaf-d2a8-5c68-a120-66e4f03d2447", 00:15:50.619 "is_configured": true, 00:15:50.619 "data_offset": 0, 00:15:50.619 "data_size": 65536 00:15:50.619 }, 00:15:50.619 { 00:15:50.619 "name": "BaseBdev2", 00:15:50.619 "uuid": "e668edda-f139-5351-9b8b-8dea83c6dc3a", 00:15:50.619 "is_configured": true, 00:15:50.619 "data_offset": 0, 00:15:50.619 "data_size": 65536 00:15:50.619 }, 00:15:50.619 { 00:15:50.619 "name": "BaseBdev3", 00:15:50.619 "uuid": "ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:50.619 "is_configured": true, 00:15:50.619 "data_offset": 0, 00:15:50.619 "data_size": 65536 00:15:50.619 }, 00:15:50.619 { 00:15:50.619 "name": "BaseBdev4", 00:15:50.619 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:50.619 "is_configured": true, 00:15:50.619 "data_offset": 0, 00:15:50.619 "data_size": 65536 00:15:50.619 } 00:15:50.619 ] 00:15:50.619 }' 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.619 [2024-12-06 16:31:32.262426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.619 [2024-12-06 16:31:32.328712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.619 [2024-12-06 16:31:32.397960] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:50.619 [2024-12-06 16:31:32.401484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.619 [2024-12-06 16:31:32.401601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.619 [2024-12-06 16:31:32.401620] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:50.619 [2024-12-06 16:31:32.412854] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.619 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.880 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.880 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.880 "name": "raid_bdev1", 00:15:50.880 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:50.880 "strip_size_kb": 0, 00:15:50.880 "state": "online", 00:15:50.880 "raid_level": "raid1", 00:15:50.880 "superblock": false, 00:15:50.880 "num_base_bdevs": 4, 00:15:50.880 "num_base_bdevs_discovered": 3, 00:15:50.880 "num_base_bdevs_operational": 3, 00:15:50.880 "base_bdevs_list": [ 00:15:50.880 { 00:15:50.880 "name": null, 00:15:50.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.880 "is_configured": false, 00:15:50.880 "data_offset": 0, 00:15:50.880 "data_size": 65536 00:15:50.880 }, 00:15:50.880 { 00:15:50.880 "name": "BaseBdev2", 00:15:50.880 "uuid": "e668edda-f139-5351-9b8b-8dea83c6dc3a", 00:15:50.880 "is_configured": true, 00:15:50.880 "data_offset": 0, 00:15:50.880 "data_size": 
65536 00:15:50.880 }, 00:15:50.880 { 00:15:50.880 "name": "BaseBdev3", 00:15:50.880 "uuid": "ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:50.880 "is_configured": true, 00:15:50.880 "data_offset": 0, 00:15:50.880 "data_size": 65536 00:15:50.880 }, 00:15:50.880 { 00:15:50.880 "name": "BaseBdev4", 00:15:50.880 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:50.880 "is_configured": true, 00:15:50.880 "data_offset": 0, 00:15:50.880 "data_size": 65536 00:15:50.880 } 00:15:50.880 ] 00:15:50.880 }' 00:15:50.880 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.880 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.140 152.50 IOPS, 457.50 MiB/s [2024-12-06T16:31:32.980Z] 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.141 "name": "raid_bdev1", 
00:15:51.141 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:51.141 "strip_size_kb": 0, 00:15:51.141 "state": "online", 00:15:51.141 "raid_level": "raid1", 00:15:51.141 "superblock": false, 00:15:51.141 "num_base_bdevs": 4, 00:15:51.141 "num_base_bdevs_discovered": 3, 00:15:51.141 "num_base_bdevs_operational": 3, 00:15:51.141 "base_bdevs_list": [ 00:15:51.141 { 00:15:51.141 "name": null, 00:15:51.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.141 "is_configured": false, 00:15:51.141 "data_offset": 0, 00:15:51.141 "data_size": 65536 00:15:51.141 }, 00:15:51.141 { 00:15:51.141 "name": "BaseBdev2", 00:15:51.141 "uuid": "e668edda-f139-5351-9b8b-8dea83c6dc3a", 00:15:51.141 "is_configured": true, 00:15:51.141 "data_offset": 0, 00:15:51.141 "data_size": 65536 00:15:51.141 }, 00:15:51.141 { 00:15:51.141 "name": "BaseBdev3", 00:15:51.141 "uuid": "ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:51.141 "is_configured": true, 00:15:51.141 "data_offset": 0, 00:15:51.141 "data_size": 65536 00:15:51.141 }, 00:15:51.141 { 00:15:51.141 "name": "BaseBdev4", 00:15:51.141 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:51.141 "is_configured": true, 00:15:51.141 "data_offset": 0, 00:15:51.141 "data_size": 65536 00:15:51.141 } 00:15:51.141 ] 00:15:51.141 }' 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.141 16:31:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.400 16:31:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.400 16:31:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:51.400 16:31:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.400 16:31:33 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.400 [2024-12-06 16:31:33.036184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.400 16:31:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.400 16:31:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:51.400 [2024-12-06 16:31:33.113055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:51.400 [2024-12-06 16:31:33.115121] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:51.400 [2024-12-06 16:31:33.227000] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:51.400 [2024-12-06 16:31:33.228427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:51.659 [2024-12-06 16:31:33.441368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:51.660 [2024-12-06 16:31:33.441790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:52.242 152.33 IOPS, 457.00 MiB/s [2024-12-06T16:31:34.081Z] [2024-12-06 16:31:34.034361] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:52.242 [2024-12-06 16:31:34.035001] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:52.502 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.502 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.502 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:15:52.502 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.502 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.502 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.502 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.502 16:31:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.502 16:31:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.502 16:31:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.502 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.502 "name": "raid_bdev1", 00:15:52.502 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:52.502 "strip_size_kb": 0, 00:15:52.502 "state": "online", 00:15:52.502 "raid_level": "raid1", 00:15:52.502 "superblock": false, 00:15:52.502 "num_base_bdevs": 4, 00:15:52.502 "num_base_bdevs_discovered": 4, 00:15:52.502 "num_base_bdevs_operational": 4, 00:15:52.502 "process": { 00:15:52.502 "type": "rebuild", 00:15:52.502 "target": "spare", 00:15:52.502 "progress": { 00:15:52.502 "blocks": 14336, 00:15:52.502 "percent": 21 00:15:52.502 } 00:15:52.502 }, 00:15:52.502 "base_bdevs_list": [ 00:15:52.502 { 00:15:52.502 "name": "spare", 00:15:52.502 "uuid": "8d089aaf-d2a8-5c68-a120-66e4f03d2447", 00:15:52.502 "is_configured": true, 00:15:52.502 "data_offset": 0, 00:15:52.502 "data_size": 65536 00:15:52.502 }, 00:15:52.502 { 00:15:52.502 "name": "BaseBdev2", 00:15:52.502 "uuid": "e668edda-f139-5351-9b8b-8dea83c6dc3a", 00:15:52.502 "is_configured": true, 00:15:52.502 "data_offset": 0, 00:15:52.502 "data_size": 65536 00:15:52.502 }, 00:15:52.502 { 00:15:52.502 "name": "BaseBdev3", 00:15:52.502 "uuid": 
"ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:52.503 "is_configured": true, 00:15:52.503 "data_offset": 0, 00:15:52.503 "data_size": 65536 00:15:52.503 }, 00:15:52.503 { 00:15:52.503 "name": "BaseBdev4", 00:15:52.503 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:52.503 "is_configured": true, 00:15:52.503 "data_offset": 0, 00:15:52.503 "data_size": 65536 00:15:52.503 } 00:15:52.503 ] 00:15:52.503 }' 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.503 [2024-12-06 16:31:34.234373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.503 [2024-12-06 16:31:34.237619] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:52.503 [2024-12-06 16:31:34.237909] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:52.503 [2024-12-06 16:31:34.239394] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:15:52.503 [2024-12-06 16:31:34.239422] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:52.503 [2024-12-06 16:31:34.253099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.503 "name": "raid_bdev1", 00:15:52.503 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:52.503 "strip_size_kb": 0, 00:15:52.503 "state": "online", 00:15:52.503 "raid_level": "raid1", 00:15:52.503 "superblock": false, 00:15:52.503 "num_base_bdevs": 4, 00:15:52.503 "num_base_bdevs_discovered": 3, 00:15:52.503 "num_base_bdevs_operational": 3, 00:15:52.503 "process": { 00:15:52.503 "type": "rebuild", 00:15:52.503 "target": "spare", 00:15:52.503 "progress": { 00:15:52.503 "blocks": 16384, 00:15:52.503 "percent": 25 00:15:52.503 } 00:15:52.503 }, 00:15:52.503 "base_bdevs_list": [ 00:15:52.503 { 00:15:52.503 "name": "spare", 00:15:52.503 "uuid": "8d089aaf-d2a8-5c68-a120-66e4f03d2447", 00:15:52.503 "is_configured": true, 00:15:52.503 "data_offset": 0, 00:15:52.503 "data_size": 65536 00:15:52.503 }, 00:15:52.503 { 00:15:52.503 "name": null, 00:15:52.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.503 "is_configured": false, 00:15:52.503 "data_offset": 0, 00:15:52.503 "data_size": 65536 00:15:52.503 }, 00:15:52.503 { 00:15:52.503 "name": "BaseBdev3", 00:15:52.503 "uuid": "ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:52.503 "is_configured": true, 00:15:52.503 "data_offset": 0, 00:15:52.503 "data_size": 65536 00:15:52.503 }, 00:15:52.503 { 00:15:52.503 "name": "BaseBdev4", 00:15:52.503 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:52.503 "is_configured": true, 00:15:52.503 "data_offset": 0, 00:15:52.503 "data_size": 65536 00:15:52.503 } 00:15:52.503 ] 00:15:52.503 }' 00:15:52.503 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=404 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.764 "name": "raid_bdev1", 00:15:52.764 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:52.764 "strip_size_kb": 0, 00:15:52.764 "state": "online", 00:15:52.764 "raid_level": "raid1", 00:15:52.764 "superblock": false, 00:15:52.764 "num_base_bdevs": 4, 00:15:52.764 "num_base_bdevs_discovered": 3, 00:15:52.764 "num_base_bdevs_operational": 3, 00:15:52.764 "process": { 00:15:52.764 "type": "rebuild", 00:15:52.764 "target": "spare", 00:15:52.764 "progress": { 00:15:52.764 "blocks": 16384, 00:15:52.764 "percent": 25 00:15:52.764 } 00:15:52.764 }, 
00:15:52.764 "base_bdevs_list": [ 00:15:52.764 { 00:15:52.764 "name": "spare", 00:15:52.764 "uuid": "8d089aaf-d2a8-5c68-a120-66e4f03d2447", 00:15:52.764 "is_configured": true, 00:15:52.764 "data_offset": 0, 00:15:52.764 "data_size": 65536 00:15:52.764 }, 00:15:52.764 { 00:15:52.764 "name": null, 00:15:52.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.764 "is_configured": false, 00:15:52.764 "data_offset": 0, 00:15:52.764 "data_size": 65536 00:15:52.764 }, 00:15:52.764 { 00:15:52.764 "name": "BaseBdev3", 00:15:52.764 "uuid": "ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:52.764 "is_configured": true, 00:15:52.764 "data_offset": 0, 00:15:52.764 "data_size": 65536 00:15:52.764 }, 00:15:52.764 { 00:15:52.764 "name": "BaseBdev4", 00:15:52.764 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:52.764 "is_configured": true, 00:15:52.764 "data_offset": 0, 00:15:52.764 "data_size": 65536 00:15:52.764 } 00:15:52.764 ] 00:15:52.764 }' 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.764 16:31:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:53.024 [2024-12-06 16:31:34.612995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:53.024 [2024-12-06 16:31:34.725275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:53.024 [2024-12-06 16:31:34.725839] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 
00:15:53.320 133.50 IOPS, 400.50 MiB/s [2024-12-06T16:31:35.159Z] [2024-12-06 16:31:35.068723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:53.588 [2024-12-06 16:31:35.306873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.847 "name": "raid_bdev1", 00:15:53.847 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:53.847 "strip_size_kb": 0, 00:15:53.847 "state": "online", 00:15:53.847 "raid_level": "raid1", 00:15:53.847 "superblock": false, 00:15:53.847 "num_base_bdevs": 4, 00:15:53.847 
"num_base_bdevs_discovered": 3, 00:15:53.847 "num_base_bdevs_operational": 3, 00:15:53.847 "process": { 00:15:53.847 "type": "rebuild", 00:15:53.847 "target": "spare", 00:15:53.847 "progress": { 00:15:53.847 "blocks": 30720, 00:15:53.847 "percent": 46 00:15:53.847 } 00:15:53.847 }, 00:15:53.847 "base_bdevs_list": [ 00:15:53.847 { 00:15:53.847 "name": "spare", 00:15:53.847 "uuid": "8d089aaf-d2a8-5c68-a120-66e4f03d2447", 00:15:53.847 "is_configured": true, 00:15:53.847 "data_offset": 0, 00:15:53.847 "data_size": 65536 00:15:53.847 }, 00:15:53.847 { 00:15:53.847 "name": null, 00:15:53.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.847 "is_configured": false, 00:15:53.847 "data_offset": 0, 00:15:53.847 "data_size": 65536 00:15:53.847 }, 00:15:53.847 { 00:15:53.847 "name": "BaseBdev3", 00:15:53.847 "uuid": "ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:53.847 "is_configured": true, 00:15:53.847 "data_offset": 0, 00:15:53.847 "data_size": 65536 00:15:53.847 }, 00:15:53.847 { 00:15:53.847 "name": "BaseBdev4", 00:15:53.847 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:53.847 "is_configured": true, 00:15:53.847 "data_offset": 0, 00:15:53.847 "data_size": 65536 00:15:53.847 } 00:15:53.847 ] 00:15:53.847 }' 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.847 [2024-12-06 16:31:35.647562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.847 16:31:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:54.106 [2024-12-06 16:31:35.748994] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:54.366 119.20 IOPS, 357.60 MiB/s [2024-12-06T16:31:36.205Z] [2024-12-06 16:31:35.972349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:54.366 [2024-12-06 16:31:35.973421] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:54.366 [2024-12-06 16:31:36.175158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:54.626 [2024-12-06 16:31:36.385294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:54.885 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.885 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.885 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.885 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.885 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.885 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.885 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.885 16:31:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.885 16:31:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.885 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.885 16:31:36 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.145 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.145 "name": "raid_bdev1", 00:15:55.145 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:55.145 "strip_size_kb": 0, 00:15:55.145 "state": "online", 00:15:55.145 "raid_level": "raid1", 00:15:55.145 "superblock": false, 00:15:55.145 "num_base_bdevs": 4, 00:15:55.145 "num_base_bdevs_discovered": 3, 00:15:55.145 "num_base_bdevs_operational": 3, 00:15:55.145 "process": { 00:15:55.145 "type": "rebuild", 00:15:55.145 "target": "spare", 00:15:55.145 "progress": { 00:15:55.145 "blocks": 49152, 00:15:55.145 "percent": 75 00:15:55.145 } 00:15:55.145 }, 00:15:55.145 "base_bdevs_list": [ 00:15:55.145 { 00:15:55.145 "name": "spare", 00:15:55.146 "uuid": "8d089aaf-d2a8-5c68-a120-66e4f03d2447", 00:15:55.146 "is_configured": true, 00:15:55.146 "data_offset": 0, 00:15:55.146 "data_size": 65536 00:15:55.146 }, 00:15:55.146 { 00:15:55.146 "name": null, 00:15:55.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.146 "is_configured": false, 00:15:55.146 "data_offset": 0, 00:15:55.146 "data_size": 65536 00:15:55.146 }, 00:15:55.146 { 00:15:55.146 "name": "BaseBdev3", 00:15:55.146 "uuid": "ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:55.146 "is_configured": true, 00:15:55.146 "data_offset": 0, 00:15:55.146 "data_size": 65536 00:15:55.146 }, 00:15:55.146 { 00:15:55.146 "name": "BaseBdev4", 00:15:55.146 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:55.146 "is_configured": true, 00:15:55.146 "data_offset": 0, 00:15:55.146 "data_size": 65536 00:15:55.146 } 00:15:55.146 ] 00:15:55.146 }' 00:15:55.146 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.146 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.146 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:55.146 105.00 IOPS, 315.00 MiB/s [2024-12-06T16:31:36.985Z] [2024-12-06 16:31:36.829803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:55.146 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.146 16:31:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:55.715 [2024-12-06 16:31:37.260991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:55.975 [2024-12-06 16:31:37.691271] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:55.975 [2024-12-06 16:31:37.791075] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:55.975 [2024-12-06 16:31:37.793142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.235 93.71 IOPS, 281.14 MiB/s [2024-12-06T16:31:38.074Z] 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.235 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.235 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.235 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.236 16:31:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.236 "name": "raid_bdev1", 00:15:56.236 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:56.236 "strip_size_kb": 0, 00:15:56.236 "state": "online", 00:15:56.236 "raid_level": "raid1", 00:15:56.236 "superblock": false, 00:15:56.236 "num_base_bdevs": 4, 00:15:56.236 "num_base_bdevs_discovered": 3, 00:15:56.236 "num_base_bdevs_operational": 3, 00:15:56.236 "base_bdevs_list": [ 00:15:56.236 { 00:15:56.236 "name": "spare", 00:15:56.236 "uuid": "8d089aaf-d2a8-5c68-a120-66e4f03d2447", 00:15:56.236 "is_configured": true, 00:15:56.236 "data_offset": 0, 00:15:56.236 "data_size": 65536 00:15:56.236 }, 00:15:56.236 { 00:15:56.236 "name": null, 00:15:56.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.236 "is_configured": false, 00:15:56.236 "data_offset": 0, 00:15:56.236 "data_size": 65536 00:15:56.236 }, 00:15:56.236 { 00:15:56.236 "name": "BaseBdev3", 00:15:56.236 "uuid": "ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:56.236 "is_configured": true, 00:15:56.236 "data_offset": 0, 00:15:56.236 "data_size": 65536 00:15:56.236 }, 00:15:56.236 { 00:15:56.236 "name": "BaseBdev4", 00:15:56.236 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:56.236 "is_configured": true, 00:15:56.236 "data_offset": 0, 00:15:56.236 "data_size": 65536 00:15:56.236 } 00:15:56.236 ] 00:15:56.236 }' 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.236 16:31:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.236 16:31:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.496 "name": "raid_bdev1", 00:15:56.496 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:56.496 "strip_size_kb": 0, 00:15:56.496 "state": "online", 00:15:56.496 "raid_level": "raid1", 00:15:56.496 "superblock": false, 00:15:56.496 "num_base_bdevs": 4, 00:15:56.496 "num_base_bdevs_discovered": 3, 00:15:56.496 "num_base_bdevs_operational": 3, 00:15:56.496 "base_bdevs_list": [ 00:15:56.496 { 00:15:56.496 "name": "spare", 00:15:56.496 "uuid": "8d089aaf-d2a8-5c68-a120-66e4f03d2447", 00:15:56.496 "is_configured": true, 
00:15:56.496 "data_offset": 0, 00:15:56.496 "data_size": 65536 00:15:56.496 }, 00:15:56.496 { 00:15:56.496 "name": null, 00:15:56.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.496 "is_configured": false, 00:15:56.496 "data_offset": 0, 00:15:56.496 "data_size": 65536 00:15:56.496 }, 00:15:56.496 { 00:15:56.496 "name": "BaseBdev3", 00:15:56.496 "uuid": "ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:56.496 "is_configured": true, 00:15:56.496 "data_offset": 0, 00:15:56.496 "data_size": 65536 00:15:56.496 }, 00:15:56.496 { 00:15:56.496 "name": "BaseBdev4", 00:15:56.496 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:56.496 "is_configured": true, 00:15:56.496 "data_offset": 0, 00:15:56.496 "data_size": 65536 00:15:56.496 } 00:15:56.496 ] 00:15:56.496 }' 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.496 "name": "raid_bdev1", 00:15:56.496 "uuid": "88d131dd-59ea-4580-8c9d-ef6b46abeace", 00:15:56.496 "strip_size_kb": 0, 00:15:56.496 "state": "online", 00:15:56.496 "raid_level": "raid1", 00:15:56.496 "superblock": false, 00:15:56.496 "num_base_bdevs": 4, 00:15:56.496 "num_base_bdevs_discovered": 3, 00:15:56.496 "num_base_bdevs_operational": 3, 00:15:56.496 "base_bdevs_list": [ 00:15:56.496 { 00:15:56.496 "name": "spare", 00:15:56.496 "uuid": "8d089aaf-d2a8-5c68-a120-66e4f03d2447", 00:15:56.496 "is_configured": true, 00:15:56.496 "data_offset": 0, 00:15:56.496 "data_size": 65536 00:15:56.496 }, 00:15:56.496 { 00:15:56.496 "name": null, 00:15:56.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.496 "is_configured": false, 00:15:56.496 "data_offset": 0, 00:15:56.496 "data_size": 65536 00:15:56.496 }, 00:15:56.496 { 00:15:56.496 "name": "BaseBdev3", 00:15:56.496 "uuid": "ebfa03b4-3485-50dd-ab2f-be4d9bcc4430", 00:15:56.496 "is_configured": true, 00:15:56.496 "data_offset": 0, 00:15:56.496 
"data_size": 65536 00:15:56.496 }, 00:15:56.496 { 00:15:56.496 "name": "BaseBdev4", 00:15:56.496 "uuid": "bc3b58ba-d10c-5373-a311-93a02acd3dbf", 00:15:56.496 "is_configured": true, 00:15:56.496 "data_offset": 0, 00:15:56.496 "data_size": 65536 00:15:56.496 } 00:15:56.496 ] 00:15:56.496 }' 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.496 16:31:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.065 [2024-12-06 16:31:38.760858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.065 [2024-12-06 16:31:38.760904] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.065 00:15:57.065 Latency(us) 00:15:57.065 [2024-12-06T16:31:38.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.065 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:57.065 raid_bdev1 : 8.00 87.17 261.50 0.00 0.00 15888.58 305.86 110352.32 00:15:57.065 [2024-12-06T16:31:38.904Z] =================================================================================================================== 00:15:57.065 [2024-12-06T16:31:38.904Z] Total : 87.17 261.50 0.00 0.00 15888.58 305.86 110352.32 00:15:57.065 [2024-12-06 16:31:38.784549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.065 [2024-12-06 16:31:38.784692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.065 [2024-12-06 16:31:38.784795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:15:57.065 [2024-12-06 16:31:38.784808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:57.065 { 00:15:57.065 "results": [ 00:15:57.065 { 00:15:57.065 "job": "raid_bdev1", 00:15:57.065 "core_mask": "0x1", 00:15:57.065 "workload": "randrw", 00:15:57.065 "percentage": 50, 00:15:57.065 "status": "finished", 00:15:57.065 "queue_depth": 2, 00:15:57.065 "io_size": 3145728, 00:15:57.065 "runtime": 7.996077, 00:15:57.065 "iops": 87.16774488289695, 00:15:57.065 "mibps": 261.50323464869086, 00:15:57.065 "io_failed": 0, 00:15:57.065 "io_timeout": 0, 00:15:57.065 "avg_latency_us": 15888.578338857113, 00:15:57.065 "min_latency_us": 305.8585152838428, 00:15:57.065 "max_latency_us": 110352.32139737991 00:15:57.065 } 00:15:57.065 ], 00:15:57.065 "core_count": 1 00:15:57.065 } 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:57.065 16:31:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:57.324 /dev/nbd0 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.324 1+0 records in 00:15:57.324 1+0 records out 00:15:57.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532122 s, 7.7 MB/s 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:57.324 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 
00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:57.325 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:57.584 /dev/nbd1 00:15:57.584 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:57.584 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:57.584 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:57.584 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:57.584 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.584 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.584 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:57.584 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:57.584 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.584 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.584 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:15:57.584 1+0 records in 00:15:57.584 1+0 records out 00:15:57.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592199 s, 6.9 MB/s 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.844 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:58.102 16:31:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:58.362 /dev/nbd1 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.362 1+0 records in 00:15:58.362 1+0 records out 00:15:58.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423488 s, 9.7 MB/s 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@893 -- # return 0 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.362 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.664 16:31:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.664 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89834 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 89834 ']' 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 
-- # kill -0 89834 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89834 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89834' 00:15:58.924 killing process with pid 89834 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 89834 00:15:58.924 Received shutdown signal, test time was about 9.891577 seconds 00:15:58.924 00:15:58.924 Latency(us) 00:15:58.924 [2024-12-06T16:31:40.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.924 [2024-12-06T16:31:40.763Z] =================================================================================================================== 00:15:58.924 [2024-12-06T16:31:40.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:58.924 [2024-12-06 16:31:40.672555] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.924 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 89834 00:15:58.924 [2024-12-06 16:31:40.718039] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:59.183 16:31:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:59.183 00:15:59.183 real 0m11.954s 00:15:59.183 user 0m15.811s 00:15:59.183 sys 0m1.928s 00:15:59.183 16:31:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.183 16:31:40 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.183 ************************************ 00:15:59.183 END TEST raid_rebuild_test_io 00:15:59.183 ************************************ 00:15:59.183 16:31:40 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:59.183 16:31:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:59.183 16:31:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.183 16:31:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:59.183 ************************************ 00:15:59.183 START TEST raid_rebuild_test_sb_io 00:15:59.183 ************************************ 00:15:59.183 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:15:59.183 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:59.183 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:59.183 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:59.183 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:59.183 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:59.183 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:59.183 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev2 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=90233 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:59.184 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 90233 00:15:59.443 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 90233 ']' 00:15:59.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.443 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.443 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.443 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.443 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.443 16:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.443 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:59.443 Zero copy mechanism will not be used. 00:15:59.443 [2024-12-06 16:31:41.098649] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:15:59.443 [2024-12-06 16:31:41.098777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90233 ] 00:15:59.443 [2024-12-06 16:31:41.260053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.702 [2024-12-06 16:31:41.286664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.702 [2024-12-06 16:31:41.328794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.702 [2024-12-06 16:31:41.328926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.271 BaseBdev1_malloc 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.271 [2024-12-06 16:31:42.059418] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:00.271 [2024-12-06 16:31:42.059500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.271 [2024-12-06 16:31:42.059526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:00.271 [2024-12-06 16:31:42.059537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.271 [2024-12-06 16:31:42.061709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.271 [2024-12-06 16:31:42.061793] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:00.271 BaseBdev1 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.271 BaseBdev2_malloc 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.271 [2024-12-06 16:31:42.083893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:00.271 [2024-12-06 16:31:42.083954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:00.271 [2024-12-06 16:31:42.083976] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:00.271 [2024-12-06 16:31:42.083985] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.271 [2024-12-06 16:31:42.086103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.271 [2024-12-06 16:31:42.086142] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:00.271 BaseBdev2 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.271 BaseBdev3_malloc 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.271 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.531 [2024-12-06 16:31:42.112574] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:00.531 [2024-12-06 16:31:42.112714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.531 [2024-12-06 16:31:42.112742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:00.531 
[2024-12-06 16:31:42.112752] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.531 [2024-12-06 16:31:42.114837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.531 [2024-12-06 16:31:42.114873] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:00.531 BaseBdev3 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.531 BaseBdev4_malloc 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.531 [2024-12-06 16:31:42.151224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:00.531 [2024-12-06 16:31:42.151293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.531 [2024-12-06 16:31:42.151318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:00.531 [2024-12-06 16:31:42.151328] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.531 [2024-12-06 16:31:42.153447] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.531 [2024-12-06 16:31:42.153562] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:00.531 BaseBdev4 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.531 spare_malloc 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.531 spare_delay 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.531 [2024-12-06 16:31:42.191559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:00.531 [2024-12-06 16:31:42.191612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.531 [2024-12-06 16:31:42.191629] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:16:00.531 [2024-12-06 16:31:42.191637] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.531 [2024-12-06 16:31:42.193789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.531 [2024-12-06 16:31:42.193859] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:00.531 spare 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.531 [2024-12-06 16:31:42.203612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.531 [2024-12-06 16:31:42.205508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.531 [2024-12-06 16:31:42.205611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:00.531 [2024-12-06 16:31:42.205673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:00.531 [2024-12-06 16:31:42.205886] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:00.531 [2024-12-06 16:31:42.205903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:00.531 [2024-12-06 16:31:42.206157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:00.531 [2024-12-06 16:31:42.206322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:00.531 [2024-12-06 16:31:42.206336] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:00.531 [2024-12-06 16:31:42.206439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.531 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.532 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.532 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.532 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.532 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.532 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.532 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.532 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.532 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.532 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.532 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.532 "name": "raid_bdev1", 00:16:00.532 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:00.532 "strip_size_kb": 0, 00:16:00.532 "state": "online", 00:16:00.532 "raid_level": "raid1", 00:16:00.532 "superblock": true, 00:16:00.532 "num_base_bdevs": 4, 00:16:00.532 "num_base_bdevs_discovered": 4, 00:16:00.532 "num_base_bdevs_operational": 4, 00:16:00.532 "base_bdevs_list": [ 00:16:00.532 { 00:16:00.532 "name": "BaseBdev1", 00:16:00.532 "uuid": "70b4a279-8d02-59c1-9f6f-58b399fa46eb", 00:16:00.532 "is_configured": true, 00:16:00.532 "data_offset": 2048, 00:16:00.532 "data_size": 63488 00:16:00.532 }, 00:16:00.532 { 00:16:00.532 "name": "BaseBdev2", 00:16:00.532 "uuid": "80d5906c-301f-595e-a635-acb56776c383", 00:16:00.532 "is_configured": true, 00:16:00.532 "data_offset": 2048, 00:16:00.532 "data_size": 63488 00:16:00.532 }, 00:16:00.532 { 00:16:00.532 "name": "BaseBdev3", 00:16:00.532 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:00.532 "is_configured": true, 00:16:00.532 "data_offset": 2048, 00:16:00.532 "data_size": 63488 00:16:00.532 }, 00:16:00.532 { 00:16:00.532 "name": "BaseBdev4", 00:16:00.532 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:00.532 "is_configured": true, 00:16:00.532 "data_offset": 2048, 00:16:00.532 "data_size": 63488 00:16:00.532 } 00:16:00.532 ] 00:16:00.532 }' 00:16:00.532 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.532 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:01.156 [2024-12-06 16:31:42.667163] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.156 [2024-12-06 16:31:42.762667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.156 16:31:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.156 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.157 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.157 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.157 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.157 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.157 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.157 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.157 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.157 "name": "raid_bdev1", 00:16:01.157 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:01.157 "strip_size_kb": 0, 00:16:01.157 "state": "online", 00:16:01.157 "raid_level": "raid1", 00:16:01.157 
"superblock": true, 00:16:01.157 "num_base_bdevs": 4, 00:16:01.157 "num_base_bdevs_discovered": 3, 00:16:01.157 "num_base_bdevs_operational": 3, 00:16:01.157 "base_bdevs_list": [ 00:16:01.157 { 00:16:01.157 "name": null, 00:16:01.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.157 "is_configured": false, 00:16:01.157 "data_offset": 0, 00:16:01.157 "data_size": 63488 00:16:01.157 }, 00:16:01.157 { 00:16:01.157 "name": "BaseBdev2", 00:16:01.157 "uuid": "80d5906c-301f-595e-a635-acb56776c383", 00:16:01.157 "is_configured": true, 00:16:01.157 "data_offset": 2048, 00:16:01.157 "data_size": 63488 00:16:01.157 }, 00:16:01.157 { 00:16:01.157 "name": "BaseBdev3", 00:16:01.157 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:01.157 "is_configured": true, 00:16:01.157 "data_offset": 2048, 00:16:01.157 "data_size": 63488 00:16:01.157 }, 00:16:01.157 { 00:16:01.157 "name": "BaseBdev4", 00:16:01.157 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:01.157 "is_configured": true, 00:16:01.157 "data_offset": 2048, 00:16:01.157 "data_size": 63488 00:16:01.157 } 00:16:01.157 ] 00:16:01.157 }' 00:16:01.157 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.157 16:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.157 [2024-12-06 16:31:42.856600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:01.157 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:01.157 Zero copy mechanism will not be used. 00:16:01.157 Running I/O for 60 seconds... 
00:16:01.415 16:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:01.415 16:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.415 16:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.674 [2024-12-06 16:31:43.256301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.674 16:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.674 16:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:01.674 [2024-12-06 16:31:43.332798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:01.674 [2024-12-06 16:31:43.334997] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.674 [2024-12-06 16:31:43.444901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:01.674 [2024-12-06 16:31:43.445559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:01.933 [2024-12-06 16:31:43.553392] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:01.933 [2024-12-06 16:31:43.554075] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:02.192 148.00 IOPS, 444.00 MiB/s [2024-12-06T16:31:44.031Z] [2024-12-06 16:31:43.878791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:02.192 [2024-12-06 16:31:43.989965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:02.451 [2024-12-06 16:31:44.227895] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.711 "name": "raid_bdev1", 00:16:02.711 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:02.711 "strip_size_kb": 0, 00:16:02.711 "state": "online", 00:16:02.711 "raid_level": "raid1", 00:16:02.711 "superblock": true, 00:16:02.711 "num_base_bdevs": 4, 00:16:02.711 "num_base_bdevs_discovered": 4, 00:16:02.711 "num_base_bdevs_operational": 4, 00:16:02.711 "process": { 00:16:02.711 "type": "rebuild", 00:16:02.711 "target": "spare", 00:16:02.711 "progress": { 00:16:02.711 "blocks": 14336, 00:16:02.711 "percent": 22 00:16:02.711 } 00:16:02.711 }, 00:16:02.711 "base_bdevs_list": [ 00:16:02.711 { 00:16:02.711 "name": "spare", 
00:16:02.711 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:02.711 "is_configured": true, 00:16:02.711 "data_offset": 2048, 00:16:02.711 "data_size": 63488 00:16:02.711 }, 00:16:02.711 { 00:16:02.711 "name": "BaseBdev2", 00:16:02.711 "uuid": "80d5906c-301f-595e-a635-acb56776c383", 00:16:02.711 "is_configured": true, 00:16:02.711 "data_offset": 2048, 00:16:02.711 "data_size": 63488 00:16:02.711 }, 00:16:02.711 { 00:16:02.711 "name": "BaseBdev3", 00:16:02.711 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:02.711 "is_configured": true, 00:16:02.711 "data_offset": 2048, 00:16:02.711 "data_size": 63488 00:16:02.711 }, 00:16:02.711 { 00:16:02.711 "name": "BaseBdev4", 00:16:02.711 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:02.711 "is_configured": true, 00:16:02.711 "data_offset": 2048, 00:16:02.711 "data_size": 63488 00:16:02.711 } 00:16:02.711 ] 00:16:02.711 }' 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.711 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.711 [2024-12-06 16:31:44.450555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.711 [2024-12-06 16:31:44.534372] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:02.711 [2024-12-06 16:31:44.542962] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.711 [2024-12-06 16:31:44.543018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.711 [2024-12-06 16:31:44.543032] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:02.971 [2024-12-06 16:31:44.561202] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.971 16:31:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.971 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.972 "name": "raid_bdev1", 00:16:02.972 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:02.972 "strip_size_kb": 0, 00:16:02.972 "state": "online", 00:16:02.972 "raid_level": "raid1", 00:16:02.972 "superblock": true, 00:16:02.972 "num_base_bdevs": 4, 00:16:02.972 "num_base_bdevs_discovered": 3, 00:16:02.972 "num_base_bdevs_operational": 3, 00:16:02.972 "base_bdevs_list": [ 00:16:02.972 { 00:16:02.972 "name": null, 00:16:02.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.972 "is_configured": false, 00:16:02.972 "data_offset": 0, 00:16:02.972 "data_size": 63488 00:16:02.972 }, 00:16:02.972 { 00:16:02.972 "name": "BaseBdev2", 00:16:02.972 "uuid": "80d5906c-301f-595e-a635-acb56776c383", 00:16:02.972 "is_configured": true, 00:16:02.972 "data_offset": 2048, 00:16:02.972 "data_size": 63488 00:16:02.972 }, 00:16:02.972 { 00:16:02.972 "name": "BaseBdev3", 00:16:02.972 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:02.972 "is_configured": true, 00:16:02.972 "data_offset": 2048, 00:16:02.972 "data_size": 63488 00:16:02.972 }, 00:16:02.972 { 00:16:02.972 "name": "BaseBdev4", 00:16:02.972 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:02.972 "is_configured": true, 00:16:02.972 "data_offset": 2048, 00:16:02.972 "data_size": 63488 00:16:02.972 } 00:16:02.972 ] 00:16:02.972 }' 00:16:02.972 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.972 16:31:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.489 142.50 IOPS, 427.50 MiB/s [2024-12-06T16:31:45.329Z] 16:31:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.490 "name": "raid_bdev1", 00:16:03.490 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:03.490 "strip_size_kb": 0, 00:16:03.490 "state": "online", 00:16:03.490 "raid_level": "raid1", 00:16:03.490 "superblock": true, 00:16:03.490 "num_base_bdevs": 4, 00:16:03.490 "num_base_bdevs_discovered": 3, 00:16:03.490 "num_base_bdevs_operational": 3, 00:16:03.490 "base_bdevs_list": [ 00:16:03.490 { 00:16:03.490 "name": null, 00:16:03.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.490 "is_configured": false, 00:16:03.490 "data_offset": 0, 00:16:03.490 "data_size": 63488 00:16:03.490 }, 00:16:03.490 { 00:16:03.490 "name": "BaseBdev2", 00:16:03.490 "uuid": "80d5906c-301f-595e-a635-acb56776c383", 00:16:03.490 "is_configured": true, 00:16:03.490 "data_offset": 
2048, 00:16:03.490 "data_size": 63488 00:16:03.490 }, 00:16:03.490 { 00:16:03.490 "name": "BaseBdev3", 00:16:03.490 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:03.490 "is_configured": true, 00:16:03.490 "data_offset": 2048, 00:16:03.490 "data_size": 63488 00:16:03.490 }, 00:16:03.490 { 00:16:03.490 "name": "BaseBdev4", 00:16:03.490 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:03.490 "is_configured": true, 00:16:03.490 "data_offset": 2048, 00:16:03.490 "data_size": 63488 00:16:03.490 } 00:16:03.490 ] 00:16:03.490 }' 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.490 [2024-12-06 16:31:45.240355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.490 16:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:03.490 [2024-12-06 16:31:45.297788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:03.490 [2024-12-06 16:31:45.299787] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:03.748 [2024-12-06 16:31:45.424491] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:04.007 [2024-12-06 16:31:45.642509] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:04.007 [2024-12-06 16:31:45.642868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:04.265 160.00 IOPS, 480.00 MiB/s [2024-12-06T16:31:46.104Z] [2024-12-06 16:31:46.091999] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:04.523 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.523 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.523 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.523 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.523 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.523 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.523 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.523 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.523 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.523 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.523 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.523 "name": "raid_bdev1", 00:16:04.523 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:04.523 
"strip_size_kb": 0, 00:16:04.523 "state": "online", 00:16:04.523 "raid_level": "raid1", 00:16:04.523 "superblock": true, 00:16:04.523 "num_base_bdevs": 4, 00:16:04.523 "num_base_bdevs_discovered": 4, 00:16:04.523 "num_base_bdevs_operational": 4, 00:16:04.523 "process": { 00:16:04.523 "type": "rebuild", 00:16:04.524 "target": "spare", 00:16:04.524 "progress": { 00:16:04.524 "blocks": 10240, 00:16:04.524 "percent": 16 00:16:04.524 } 00:16:04.524 }, 00:16:04.524 "base_bdevs_list": [ 00:16:04.524 { 00:16:04.524 "name": "spare", 00:16:04.524 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:04.524 "is_configured": true, 00:16:04.524 "data_offset": 2048, 00:16:04.524 "data_size": 63488 00:16:04.524 }, 00:16:04.524 { 00:16:04.524 "name": "BaseBdev2", 00:16:04.524 "uuid": "80d5906c-301f-595e-a635-acb56776c383", 00:16:04.524 "is_configured": true, 00:16:04.524 "data_offset": 2048, 00:16:04.524 "data_size": 63488 00:16:04.524 }, 00:16:04.524 { 00:16:04.524 "name": "BaseBdev3", 00:16:04.524 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:04.524 "is_configured": true, 00:16:04.524 "data_offset": 2048, 00:16:04.524 "data_size": 63488 00:16:04.524 }, 00:16:04.524 { 00:16:04.524 "name": "BaseBdev4", 00:16:04.524 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:04.524 "is_configured": true, 00:16:04.524 "data_offset": 2048, 00:16:04.524 "data_size": 63488 00:16:04.524 } 00:16:04.524 ] 00:16:04.524 }' 00:16:04.524 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.782 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.782 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.782 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.782 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 
00:16:04.782 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:04.782 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:04.782 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:04.782 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:04.782 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:04.782 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:04.782 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.782 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.782 [2024-12-06 16:31:46.436635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:04.782 [2024-12-06 16:31:46.445576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:04.782 [2024-12-06 16:31:46.586596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:05.040 [2024-12-06 16:31:46.789783] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:16:05.040 [2024-12-06 16:31:46.789910] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.040 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.040 "name": "raid_bdev1", 00:16:05.040 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:05.040 "strip_size_kb": 0, 00:16:05.040 "state": "online", 00:16:05.040 "raid_level": "raid1", 00:16:05.040 "superblock": true, 00:16:05.040 "num_base_bdevs": 4, 00:16:05.040 "num_base_bdevs_discovered": 3, 00:16:05.040 "num_base_bdevs_operational": 3, 00:16:05.040 "process": { 00:16:05.040 "type": "rebuild", 00:16:05.040 "target": "spare", 00:16:05.040 "progress": { 00:16:05.040 "blocks": 16384, 00:16:05.040 "percent": 25 00:16:05.040 } 00:16:05.040 }, 00:16:05.040 "base_bdevs_list": [ 00:16:05.040 { 00:16:05.040 "name": "spare", 00:16:05.040 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:05.040 "is_configured": true, 00:16:05.040 "data_offset": 2048, 00:16:05.040 "data_size": 63488 00:16:05.040 }, 00:16:05.040 { 
00:16:05.040 "name": null, 00:16:05.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.041 "is_configured": false, 00:16:05.041 "data_offset": 0, 00:16:05.041 "data_size": 63488 00:16:05.041 }, 00:16:05.041 { 00:16:05.041 "name": "BaseBdev3", 00:16:05.041 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:05.041 "is_configured": true, 00:16:05.041 "data_offset": 2048, 00:16:05.041 "data_size": 63488 00:16:05.041 }, 00:16:05.041 { 00:16:05.041 "name": "BaseBdev4", 00:16:05.041 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:05.041 "is_configured": true, 00:16:05.041 "data_offset": 2048, 00:16:05.041 "data_size": 63488 00:16:05.041 } 00:16:05.041 ] 00:16:05.041 }' 00:16:05.041 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.300 133.00 IOPS, 399.00 MiB/s [2024-12-06T16:31:47.139Z] 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=416 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.300 "name": "raid_bdev1", 00:16:05.300 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:05.300 "strip_size_kb": 0, 00:16:05.300 "state": "online", 00:16:05.300 "raid_level": "raid1", 00:16:05.300 "superblock": true, 00:16:05.300 "num_base_bdevs": 4, 00:16:05.300 "num_base_bdevs_discovered": 3, 00:16:05.300 "num_base_bdevs_operational": 3, 00:16:05.300 "process": { 00:16:05.300 "type": "rebuild", 00:16:05.300 "target": "spare", 00:16:05.300 "progress": { 00:16:05.300 "blocks": 18432, 00:16:05.300 "percent": 29 00:16:05.300 } 00:16:05.300 }, 00:16:05.300 "base_bdevs_list": [ 00:16:05.300 { 00:16:05.300 "name": "spare", 00:16:05.300 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:05.300 "is_configured": true, 00:16:05.300 "data_offset": 2048, 00:16:05.300 "data_size": 63488 00:16:05.300 }, 00:16:05.300 { 00:16:05.300 "name": null, 00:16:05.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.300 "is_configured": false, 00:16:05.300 "data_offset": 0, 00:16:05.300 "data_size": 63488 00:16:05.300 }, 00:16:05.300 { 00:16:05.300 "name": "BaseBdev3", 00:16:05.300 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:05.300 "is_configured": true, 00:16:05.300 "data_offset": 2048, 00:16:05.300 "data_size": 63488 00:16:05.300 }, 00:16:05.300 { 00:16:05.300 "name": "BaseBdev4", 
00:16:05.300 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:05.300 "is_configured": true, 00:16:05.300 "data_offset": 2048, 00:16:05.300 "data_size": 63488 00:16:05.300 } 00:16:05.300 ] 00:16:05.300 }' 00:16:05.300 16:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.300 16:31:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.300 16:31:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.300 16:31:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.300 16:31:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.559 [2024-12-06 16:31:47.160946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:05.818 [2024-12-06 16:31:47.513802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:05.818 [2024-12-06 16:31:47.514836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:06.079 [2024-12-06 16:31:47.747316] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:06.079 [2024-12-06 16:31:47.747963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:06.338 117.80 IOPS, 353.40 MiB/s [2024-12-06T16:31:48.177Z] 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.338 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.338 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:06.338 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.338 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.338 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.338 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.338 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.338 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.338 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.338 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.338 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.338 "name": "raid_bdev1", 00:16:06.338 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:06.338 "strip_size_kb": 0, 00:16:06.338 "state": "online", 00:16:06.338 "raid_level": "raid1", 00:16:06.338 "superblock": true, 00:16:06.338 "num_base_bdevs": 4, 00:16:06.338 "num_base_bdevs_discovered": 3, 00:16:06.338 "num_base_bdevs_operational": 3, 00:16:06.338 "process": { 00:16:06.338 "type": "rebuild", 00:16:06.338 "target": "spare", 00:16:06.338 "progress": { 00:16:06.338 "blocks": 32768, 00:16:06.338 "percent": 51 00:16:06.338 } 00:16:06.338 }, 00:16:06.338 "base_bdevs_list": [ 00:16:06.338 { 00:16:06.338 "name": "spare", 00:16:06.338 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:06.338 "is_configured": true, 00:16:06.338 "data_offset": 2048, 00:16:06.338 "data_size": 63488 00:16:06.338 }, 00:16:06.338 { 00:16:06.338 "name": null, 00:16:06.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.338 "is_configured": false, 00:16:06.338 
"data_offset": 0, 00:16:06.338 "data_size": 63488 00:16:06.338 }, 00:16:06.338 { 00:16:06.338 "name": "BaseBdev3", 00:16:06.338 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:06.338 "is_configured": true, 00:16:06.338 "data_offset": 2048, 00:16:06.338 "data_size": 63488 00:16:06.338 }, 00:16:06.338 { 00:16:06.338 "name": "BaseBdev4", 00:16:06.338 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:06.338 "is_configured": true, 00:16:06.338 "data_offset": 2048, 00:16:06.338 "data_size": 63488 00:16:06.338 } 00:16:06.338 ] 00:16:06.338 }' 00:16:06.338 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.598 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.598 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.598 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.598 16:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.858 [2024-12-06 16:31:48.512400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:07.117 [2024-12-06 16:31:48.831322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:07.117 104.67 IOPS, 314.00 MiB/s [2024-12-06T16:31:48.956Z] [2024-12-06 16:31:48.952690] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.687 "name": "raid_bdev1", 00:16:07.687 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:07.687 "strip_size_kb": 0, 00:16:07.687 "state": "online", 00:16:07.687 "raid_level": "raid1", 00:16:07.687 "superblock": true, 00:16:07.687 "num_base_bdevs": 4, 00:16:07.687 "num_base_bdevs_discovered": 3, 00:16:07.687 "num_base_bdevs_operational": 3, 00:16:07.687 "process": { 00:16:07.687 "type": "rebuild", 00:16:07.687 "target": "spare", 00:16:07.687 "progress": { 00:16:07.687 "blocks": 49152, 00:16:07.687 "percent": 77 00:16:07.687 } 00:16:07.687 }, 00:16:07.687 "base_bdevs_list": [ 00:16:07.687 { 00:16:07.687 "name": "spare", 00:16:07.687 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:07.687 "is_configured": true, 00:16:07.687 "data_offset": 2048, 00:16:07.687 "data_size": 63488 00:16:07.687 }, 00:16:07.687 { 00:16:07.687 "name": null, 00:16:07.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.687 "is_configured": false, 00:16:07.687 
"data_offset": 0, 00:16:07.687 "data_size": 63488 00:16:07.687 }, 00:16:07.687 { 00:16:07.687 "name": "BaseBdev3", 00:16:07.687 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:07.687 "is_configured": true, 00:16:07.687 "data_offset": 2048, 00:16:07.687 "data_size": 63488 00:16:07.687 }, 00:16:07.687 { 00:16:07.687 "name": "BaseBdev4", 00:16:07.687 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:07.687 "is_configured": true, 00:16:07.687 "data_offset": 2048, 00:16:07.687 "data_size": 63488 00:16:07.687 } 00:16:07.687 ] 00:16:07.687 }' 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.687 [2024-12-06 16:31:49.305791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.687 16:31:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.687 [2024-12-06 16:31:49.413187] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:07.687 [2024-12-06 16:31:49.413832] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:08.255 94.43 IOPS, 283.29 MiB/s [2024-12-06T16:31:50.094Z] [2024-12-06 16:31:50.091269] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:08.514 [2024-12-06 16:31:50.196549] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:08.514 [2024-12-06 16:31:50.199135] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.785 "name": "raid_bdev1", 00:16:08.785 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:08.785 "strip_size_kb": 0, 00:16:08.785 "state": "online", 00:16:08.785 "raid_level": "raid1", 00:16:08.785 "superblock": true, 00:16:08.785 "num_base_bdevs": 4, 00:16:08.785 "num_base_bdevs_discovered": 3, 00:16:08.785 "num_base_bdevs_operational": 3, 00:16:08.785 "base_bdevs_list": [ 00:16:08.785 { 00:16:08.785 "name": "spare", 00:16:08.785 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:08.785 "is_configured": true, 00:16:08.785 "data_offset": 2048, 00:16:08.785 
"data_size": 63488 00:16:08.785 }, 00:16:08.785 { 00:16:08.785 "name": null, 00:16:08.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.785 "is_configured": false, 00:16:08.785 "data_offset": 0, 00:16:08.785 "data_size": 63488 00:16:08.785 }, 00:16:08.785 { 00:16:08.785 "name": "BaseBdev3", 00:16:08.785 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:08.785 "is_configured": true, 00:16:08.785 "data_offset": 2048, 00:16:08.785 "data_size": 63488 00:16:08.785 }, 00:16:08.785 { 00:16:08.785 "name": "BaseBdev4", 00:16:08.785 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:08.785 "is_configured": true, 00:16:08.785 "data_offset": 2048, 00:16:08.785 "data_size": 63488 00:16:08.785 } 00:16:08.785 ] 00:16:08.785 }' 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.785 
16:31:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.785 "name": "raid_bdev1", 00:16:08.785 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:08.785 "strip_size_kb": 0, 00:16:08.785 "state": "online", 00:16:08.785 "raid_level": "raid1", 00:16:08.785 "superblock": true, 00:16:08.785 "num_base_bdevs": 4, 00:16:08.785 "num_base_bdevs_discovered": 3, 00:16:08.785 "num_base_bdevs_operational": 3, 00:16:08.785 "base_bdevs_list": [ 00:16:08.785 { 00:16:08.785 "name": "spare", 00:16:08.785 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:08.785 "is_configured": true, 00:16:08.785 "data_offset": 2048, 00:16:08.785 "data_size": 63488 00:16:08.785 }, 00:16:08.785 { 00:16:08.785 "name": null, 00:16:08.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.785 "is_configured": false, 00:16:08.785 "data_offset": 0, 00:16:08.785 "data_size": 63488 00:16:08.785 }, 00:16:08.785 { 00:16:08.785 "name": "BaseBdev3", 00:16:08.785 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:08.785 "is_configured": true, 00:16:08.785 "data_offset": 2048, 00:16:08.785 "data_size": 63488 00:16:08.785 }, 00:16:08.785 { 00:16:08.785 "name": "BaseBdev4", 00:16:08.785 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:08.785 "is_configured": true, 00:16:08.785 "data_offset": 2048, 00:16:08.785 "data_size": 63488 00:16:08.785 } 00:16:08.785 ] 00:16:08.785 }' 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.785 16:31:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.785 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.049 "name": "raid_bdev1", 00:16:09.049 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:09.049 "strip_size_kb": 0, 00:16:09.049 "state": "online", 00:16:09.049 "raid_level": "raid1", 00:16:09.049 "superblock": true, 00:16:09.049 "num_base_bdevs": 4, 00:16:09.049 "num_base_bdevs_discovered": 3, 00:16:09.049 "num_base_bdevs_operational": 3, 00:16:09.049 "base_bdevs_list": [ 00:16:09.049 { 00:16:09.049 "name": "spare", 00:16:09.049 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:09.049 "is_configured": true, 00:16:09.049 "data_offset": 2048, 00:16:09.049 "data_size": 63488 00:16:09.049 }, 00:16:09.049 { 00:16:09.049 "name": null, 00:16:09.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.049 "is_configured": false, 00:16:09.049 "data_offset": 0, 00:16:09.049 "data_size": 63488 00:16:09.049 }, 00:16:09.049 { 00:16:09.049 "name": "BaseBdev3", 00:16:09.049 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:09.049 "is_configured": true, 00:16:09.049 "data_offset": 2048, 00:16:09.049 "data_size": 63488 00:16:09.049 }, 00:16:09.049 { 00:16:09.049 "name": "BaseBdev4", 00:16:09.049 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:09.049 "is_configured": true, 00:16:09.049 "data_offset": 2048, 00:16:09.049 "data_size": 63488 00:16:09.049 } 00:16:09.049 ] 00:16:09.049 }' 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.049 16:31:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.309 88.00 IOPS, 264.00 MiB/s [2024-12-06T16:31:51.149Z] 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:09.310 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.310 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.310 [2024-12-06 16:31:51.115370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:09.310 [2024-12-06 16:31:51.115416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.569 00:16:09.569 Latency(us) 00:16:09.569 [2024-12-06T16:31:51.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.569 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:09.569 raid_bdev1 : 8.33 85.33 256.00 0.00 0.00 16401.80 300.49 115389.15 00:16:09.569 [2024-12-06T16:31:51.408Z] =================================================================================================================== 00:16:09.569 [2024-12-06T16:31:51.408Z] Total : 85.33 256.00 0.00 0.00 16401.80 300.49 115389.15 00:16:09.569 { 00:16:09.569 "results": [ 00:16:09.569 { 00:16:09.569 "job": "raid_bdev1", 00:16:09.569 "core_mask": "0x1", 00:16:09.569 "workload": "randrw", 00:16:09.569 "percentage": 50, 00:16:09.569 "status": "finished", 00:16:09.569 "queue_depth": 2, 00:16:09.569 "io_size": 3145728, 00:16:09.569 "runtime": 8.332175, 00:16:09.569 "iops": 85.33186112869689, 00:16:09.569 "mibps": 255.99558338609066, 00:16:09.569 "io_failed": 0, 00:16:09.569 "io_timeout": 0, 00:16:09.569 "avg_latency_us": 16401.80092986691, 00:16:09.570 "min_latency_us": 300.49257641921395, 00:16:09.570 "max_latency_us": 115389.14934497817 00:16:09.570 } 00:16:09.570 ], 00:16:09.570 "core_count": 1 00:16:09.570 } 00:16:09.570 [2024-12-06 16:31:51.179092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.570 [2024-12-06 16:31:51.179168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.570 [2024-12-06 16:31:51.179292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.570 [2024-12-06 16:31:51.179309] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.570 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:09.829 /dev/nbd0 00:16:09.829 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:09.829 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:09.829 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:09.829 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.830 1+0 records in 00:16:09.830 1+0 records out 00:16:09.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603303 s, 6.8 MB/s 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.830 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.830 16:31:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:10.090 /dev/nbd1 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:10.090 1+0 records in 00:16:10.090 1+0 records out 00:16:10.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348375 s, 11.8 MB/s 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.090 16:31:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:10.661 /dev/nbd1 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # local i 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:10.661 1+0 records in 00:16:10.661 1+0 records out 00:16:10.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453776 s, 9.0 MB/s 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:10.661 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:10.921 16:31:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:10.921 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.921 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:10.921 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:10.921 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:10.921 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.921 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:10.921 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:11.181 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.182 [2024-12-06 16:31:52.993616] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:11.182 [2024-12-06 16:31:52.993742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.182 [2024-12-06 16:31:52.993787] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:11.182 [2024-12-06 16:31:52.993825] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.182 [2024-12-06 16:31:52.996342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.182 [2024-12-06 16:31:52.996419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:11.182 [2024-12-06 16:31:52.996576] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:11.182 [2024-12-06 16:31:52.996654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.182 [2024-12-06 16:31:52.996851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.182 [2024-12-06 16:31:52.997049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:11.182 spare 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.182 16:31:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.442 [2024-12-06 16:31:53.096998] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000006600 00:16:11.442 [2024-12-06 16:31:53.097049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:11.442 [2024-12-06 16:31:53.097451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:16:11.442 [2024-12-06 16:31:53.097629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:11.442 [2024-12-06 16:31:53.097647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:11.442 [2024-12-06 16:31:53.097820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.442 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.442 "name": "raid_bdev1", 00:16:11.442 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:11.442 "strip_size_kb": 0, 00:16:11.442 "state": "online", 00:16:11.442 "raid_level": "raid1", 00:16:11.442 "superblock": true, 00:16:11.442 "num_base_bdevs": 4, 00:16:11.442 "num_base_bdevs_discovered": 3, 00:16:11.442 "num_base_bdevs_operational": 3, 00:16:11.442 "base_bdevs_list": [ 00:16:11.442 { 00:16:11.442 "name": "spare", 00:16:11.442 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:11.442 "is_configured": true, 00:16:11.442 "data_offset": 2048, 00:16:11.442 "data_size": 63488 00:16:11.442 }, 00:16:11.442 { 00:16:11.442 "name": null, 00:16:11.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.442 "is_configured": false, 00:16:11.442 "data_offset": 2048, 00:16:11.442 "data_size": 63488 00:16:11.442 }, 00:16:11.442 { 00:16:11.442 "name": "BaseBdev3", 00:16:11.442 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:11.442 "is_configured": true, 00:16:11.443 "data_offset": 2048, 00:16:11.443 "data_size": 63488 00:16:11.443 }, 00:16:11.443 { 00:16:11.443 "name": "BaseBdev4", 00:16:11.443 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:11.443 "is_configured": true, 00:16:11.443 "data_offset": 2048, 00:16:11.443 "data_size": 63488 00:16:11.443 } 00:16:11.443 ] 00:16:11.443 }' 00:16:11.443 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.443 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.702 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.702 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.702 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.702 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.702 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.702 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.702 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.702 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.702 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.962 "name": "raid_bdev1", 00:16:11.962 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:11.962 "strip_size_kb": 0, 00:16:11.962 "state": "online", 00:16:11.962 "raid_level": "raid1", 00:16:11.962 "superblock": true, 00:16:11.962 "num_base_bdevs": 4, 00:16:11.962 "num_base_bdevs_discovered": 3, 00:16:11.962 "num_base_bdevs_operational": 3, 00:16:11.962 "base_bdevs_list": [ 00:16:11.962 { 00:16:11.962 "name": "spare", 00:16:11.962 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:11.962 "is_configured": true, 00:16:11.962 "data_offset": 2048, 00:16:11.962 "data_size": 63488 00:16:11.962 }, 
00:16:11.962 { 00:16:11.962 "name": null, 00:16:11.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.962 "is_configured": false, 00:16:11.962 "data_offset": 2048, 00:16:11.962 "data_size": 63488 00:16:11.962 }, 00:16:11.962 { 00:16:11.962 "name": "BaseBdev3", 00:16:11.962 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:11.962 "is_configured": true, 00:16:11.962 "data_offset": 2048, 00:16:11.962 "data_size": 63488 00:16:11.962 }, 00:16:11.962 { 00:16:11.962 "name": "BaseBdev4", 00:16:11.962 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:11.962 "is_configured": true, 00:16:11.962 "data_offset": 2048, 00:16:11.962 "data_size": 63488 00:16:11.962 } 00:16:11.962 ] 00:16:11.962 }' 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:11.962 16:31:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.962 [2024-12-06 16:31:53.716856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.962 
16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.962 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.962 "name": "raid_bdev1", 00:16:11.962 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:11.962 "strip_size_kb": 0, 00:16:11.962 "state": "online", 00:16:11.962 "raid_level": "raid1", 00:16:11.963 "superblock": true, 00:16:11.963 "num_base_bdevs": 4, 00:16:11.963 "num_base_bdevs_discovered": 2, 00:16:11.963 "num_base_bdevs_operational": 2, 00:16:11.963 "base_bdevs_list": [ 00:16:11.963 { 00:16:11.963 "name": null, 00:16:11.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.963 "is_configured": false, 00:16:11.963 "data_offset": 0, 00:16:11.963 "data_size": 63488 00:16:11.963 }, 00:16:11.963 { 00:16:11.963 "name": null, 00:16:11.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.963 "is_configured": false, 00:16:11.963 "data_offset": 2048, 00:16:11.963 "data_size": 63488 00:16:11.963 }, 00:16:11.963 { 00:16:11.963 "name": "BaseBdev3", 00:16:11.963 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:11.963 "is_configured": true, 00:16:11.963 "data_offset": 2048, 00:16:11.963 "data_size": 63488 00:16:11.963 }, 00:16:11.963 { 00:16:11.963 "name": "BaseBdev4", 00:16:11.963 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:11.963 "is_configured": true, 00:16:11.963 "data_offset": 2048, 00:16:11.963 "data_size": 63488 00:16:11.963 } 00:16:11.963 ] 00:16:11.963 }' 00:16:11.963 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.963 16:31:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.532 16:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.532 16:31:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.532 16:31:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.532 [2024-12-06 16:31:54.188169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.532 [2024-12-06 16:31:54.188509] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:12.532 [2024-12-06 16:31:54.188583] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:12.532 [2024-12-06 16:31:54.188680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.532 [2024-12-06 16:31:54.193287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:16:12.532 16:31:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.532 16:31:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:12.532 [2024-12-06 16:31:54.195565] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.472 "name": "raid_bdev1", 00:16:13.472 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:13.472 "strip_size_kb": 0, 00:16:13.472 "state": "online", 00:16:13.472 "raid_level": "raid1", 00:16:13.472 "superblock": true, 00:16:13.472 "num_base_bdevs": 4, 00:16:13.472 "num_base_bdevs_discovered": 3, 00:16:13.472 "num_base_bdevs_operational": 3, 00:16:13.472 "process": { 00:16:13.472 "type": "rebuild", 00:16:13.472 "target": "spare", 00:16:13.472 "progress": { 00:16:13.472 "blocks": 20480, 00:16:13.472 "percent": 32 00:16:13.472 } 00:16:13.472 }, 00:16:13.472 "base_bdevs_list": [ 00:16:13.472 { 00:16:13.472 "name": "spare", 00:16:13.472 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:13.472 "is_configured": true, 00:16:13.472 "data_offset": 2048, 00:16:13.472 "data_size": 63488 00:16:13.472 }, 00:16:13.472 { 00:16:13.472 "name": null, 00:16:13.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.472 "is_configured": false, 00:16:13.472 "data_offset": 2048, 00:16:13.472 "data_size": 63488 00:16:13.472 }, 00:16:13.472 { 00:16:13.472 "name": "BaseBdev3", 00:16:13.472 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:13.472 "is_configured": true, 00:16:13.472 "data_offset": 2048, 00:16:13.472 "data_size": 63488 00:16:13.472 }, 00:16:13.472 { 00:16:13.472 "name": "BaseBdev4", 00:16:13.472 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:13.472 "is_configured": true, 00:16:13.472 "data_offset": 2048, 00:16:13.472 "data_size": 63488 00:16:13.472 } 00:16:13.472 ] 00:16:13.472 }' 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.472 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.731 [2024-12-06 16:31:55.320392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.731 [2024-12-06 16:31:55.400724] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:13.731 [2024-12-06 16:31:55.400803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.731 [2024-12-06 16:31:55.400826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.731 [2024-12-06 16:31:55.400834] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.731 "name": "raid_bdev1", 00:16:13.731 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:13.731 "strip_size_kb": 0, 00:16:13.731 "state": "online", 00:16:13.731 "raid_level": "raid1", 00:16:13.731 "superblock": true, 00:16:13.731 "num_base_bdevs": 4, 00:16:13.731 "num_base_bdevs_discovered": 2, 00:16:13.731 "num_base_bdevs_operational": 2, 00:16:13.731 "base_bdevs_list": [ 00:16:13.731 { 00:16:13.731 "name": null, 00:16:13.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.731 "is_configured": false, 00:16:13.731 "data_offset": 0, 00:16:13.731 "data_size": 63488 00:16:13.731 }, 00:16:13.731 { 00:16:13.731 "name": null, 00:16:13.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.731 "is_configured": false, 00:16:13.731 
"data_offset": 2048, 00:16:13.731 "data_size": 63488 00:16:13.731 }, 00:16:13.731 { 00:16:13.731 "name": "BaseBdev3", 00:16:13.731 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:13.731 "is_configured": true, 00:16:13.731 "data_offset": 2048, 00:16:13.731 "data_size": 63488 00:16:13.731 }, 00:16:13.731 { 00:16:13.731 "name": "BaseBdev4", 00:16:13.731 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:13.731 "is_configured": true, 00:16:13.731 "data_offset": 2048, 00:16:13.731 "data_size": 63488 00:16:13.731 } 00:16:13.731 ] 00:16:13.731 }' 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.731 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.314 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:14.314 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.314 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.314 [2024-12-06 16:31:55.864877] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:14.314 [2024-12-06 16:31:55.865007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.314 [2024-12-06 16:31:55.865059] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:14.314 [2024-12-06 16:31:55.865094] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.314 [2024-12-06 16:31:55.865625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.314 [2024-12-06 16:31:55.865690] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:14.314 [2024-12-06 16:31:55.865830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:14.314 [2024-12-06 
16:31:55.865874] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:14.315 [2024-12-06 16:31:55.865926] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:14.315 [2024-12-06 16:31:55.865987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.315 [2024-12-06 16:31:55.870485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:14.315 spare 00:16:14.315 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.315 16:31:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:14.315 [2024-12-06 16:31:55.872676] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:15.283 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.283 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.283 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.283 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.283 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.283 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.283 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.283 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.283 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.283 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.283 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.283 "name": "raid_bdev1", 00:16:15.283 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:15.283 "strip_size_kb": 0, 00:16:15.284 "state": "online", 00:16:15.284 "raid_level": "raid1", 00:16:15.284 "superblock": true, 00:16:15.284 "num_base_bdevs": 4, 00:16:15.284 "num_base_bdevs_discovered": 3, 00:16:15.284 "num_base_bdevs_operational": 3, 00:16:15.284 "process": { 00:16:15.284 "type": "rebuild", 00:16:15.284 "target": "spare", 00:16:15.284 "progress": { 00:16:15.284 "blocks": 20480, 00:16:15.284 "percent": 32 00:16:15.284 } 00:16:15.284 }, 00:16:15.284 "base_bdevs_list": [ 00:16:15.284 { 00:16:15.284 "name": "spare", 00:16:15.284 "uuid": "77f4b9bb-9cac-591a-912c-d8ef722058fe", 00:16:15.284 "is_configured": true, 00:16:15.284 "data_offset": 2048, 00:16:15.284 "data_size": 63488 00:16:15.284 }, 00:16:15.284 { 00:16:15.284 "name": null, 00:16:15.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.284 "is_configured": false, 00:16:15.284 "data_offset": 2048, 00:16:15.284 "data_size": 63488 00:16:15.284 }, 00:16:15.284 { 00:16:15.284 "name": "BaseBdev3", 00:16:15.284 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:15.284 "is_configured": true, 00:16:15.284 "data_offset": 2048, 00:16:15.284 "data_size": 63488 00:16:15.284 }, 00:16:15.284 { 00:16:15.284 "name": "BaseBdev4", 00:16:15.284 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:15.284 "is_configured": true, 00:16:15.284 "data_offset": 2048, 00:16:15.284 "data_size": 63488 00:16:15.284 } 00:16:15.284 ] 00:16:15.284 }' 00:16:15.284 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.284 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.284 16:31:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.284 [2024-12-06 16:31:57.040861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.284 [2024-12-06 16:31:57.077750] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:15.284 [2024-12-06 16:31:57.077824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.284 [2024-12-06 16:31:57.077841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.284 [2024-12-06 16:31:57.077850] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.284 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.543 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.543 "name": "raid_bdev1", 00:16:15.543 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:15.543 "strip_size_kb": 0, 00:16:15.543 "state": "online", 00:16:15.543 "raid_level": "raid1", 00:16:15.543 "superblock": true, 00:16:15.543 "num_base_bdevs": 4, 00:16:15.543 "num_base_bdevs_discovered": 2, 00:16:15.543 "num_base_bdevs_operational": 2, 00:16:15.543 "base_bdevs_list": [ 00:16:15.543 { 00:16:15.543 "name": null, 00:16:15.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.543 "is_configured": false, 00:16:15.543 "data_offset": 0, 00:16:15.543 "data_size": 63488 00:16:15.543 }, 00:16:15.543 { 00:16:15.543 "name": null, 00:16:15.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.543 "is_configured": false, 00:16:15.543 "data_offset": 2048, 00:16:15.543 "data_size": 63488 00:16:15.543 }, 00:16:15.543 { 00:16:15.543 "name": "BaseBdev3", 00:16:15.543 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:15.543 "is_configured": true, 
00:16:15.543 "data_offset": 2048, 00:16:15.543 "data_size": 63488 00:16:15.543 }, 00:16:15.543 { 00:16:15.543 "name": "BaseBdev4", 00:16:15.543 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:15.543 "is_configured": true, 00:16:15.543 "data_offset": 2048, 00:16:15.543 "data_size": 63488 00:16:15.543 } 00:16:15.543 ] 00:16:15.543 }' 00:16:15.543 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.543 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.802 "name": "raid_bdev1", 00:16:15.802 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:15.802 "strip_size_kb": 0, 00:16:15.802 "state": "online", 00:16:15.802 "raid_level": "raid1", 00:16:15.802 
"superblock": true, 00:16:15.802 "num_base_bdevs": 4, 00:16:15.802 "num_base_bdevs_discovered": 2, 00:16:15.802 "num_base_bdevs_operational": 2, 00:16:15.802 "base_bdevs_list": [ 00:16:15.802 { 00:16:15.802 "name": null, 00:16:15.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.802 "is_configured": false, 00:16:15.802 "data_offset": 0, 00:16:15.802 "data_size": 63488 00:16:15.802 }, 00:16:15.802 { 00:16:15.802 "name": null, 00:16:15.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.802 "is_configured": false, 00:16:15.802 "data_offset": 2048, 00:16:15.802 "data_size": 63488 00:16:15.802 }, 00:16:15.802 { 00:16:15.802 "name": "BaseBdev3", 00:16:15.802 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:15.802 "is_configured": true, 00:16:15.802 "data_offset": 2048, 00:16:15.802 "data_size": 63488 00:16:15.802 }, 00:16:15.802 { 00:16:15.802 "name": "BaseBdev4", 00:16:15.802 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:15.802 "is_configured": true, 00:16:15.802 "data_offset": 2048, 00:16:15.802 "data_size": 63488 00:16:15.802 } 00:16:15.802 ] 00:16:15.802 }' 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.802 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.061 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.061 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:16.061 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.061 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.061 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:16.061 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:16.061 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.061 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.061 [2024-12-06 16:31:57.685491] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:16.061 [2024-12-06 16:31:57.685558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.061 [2024-12-06 16:31:57.685577] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:16.061 [2024-12-06 16:31:57.685589] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.061 [2024-12-06 16:31:57.686023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.061 [2024-12-06 16:31:57.686044] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:16.061 [2024-12-06 16:31:57.686120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:16.061 [2024-12-06 16:31:57.686137] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:16.061 [2024-12-06 16:31:57.686145] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:16.061 [2024-12-06 16:31:57.686157] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:16.061 BaseBdev1 00:16:16.061 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.061 16:31:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:16.995 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:16.995 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.995 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.995 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.996 "name": "raid_bdev1", 00:16:16.996 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:16.996 "strip_size_kb": 0, 00:16:16.996 "state": "online", 00:16:16.996 "raid_level": "raid1", 00:16:16.996 "superblock": true, 00:16:16.996 
"num_base_bdevs": 4, 00:16:16.996 "num_base_bdevs_discovered": 2, 00:16:16.996 "num_base_bdevs_operational": 2, 00:16:16.996 "base_bdevs_list": [ 00:16:16.996 { 00:16:16.996 "name": null, 00:16:16.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.996 "is_configured": false, 00:16:16.996 "data_offset": 0, 00:16:16.996 "data_size": 63488 00:16:16.996 }, 00:16:16.996 { 00:16:16.996 "name": null, 00:16:16.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.996 "is_configured": false, 00:16:16.996 "data_offset": 2048, 00:16:16.996 "data_size": 63488 00:16:16.996 }, 00:16:16.996 { 00:16:16.996 "name": "BaseBdev3", 00:16:16.996 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:16.996 "is_configured": true, 00:16:16.996 "data_offset": 2048, 00:16:16.996 "data_size": 63488 00:16:16.996 }, 00:16:16.996 { 00:16:16.996 "name": "BaseBdev4", 00:16:16.996 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:16.996 "is_configured": true, 00:16:16.996 "data_offset": 2048, 00:16:16.996 "data_size": 63488 00:16:16.996 } 00:16:16.996 ] 00:16:16.996 }' 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.996 16:31:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.562 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:17.562 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.562 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:17.562 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:17.562 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.562 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.563 16:31:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.563 "name": "raid_bdev1", 00:16:17.563 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:17.563 "strip_size_kb": 0, 00:16:17.563 "state": "online", 00:16:17.563 "raid_level": "raid1", 00:16:17.563 "superblock": true, 00:16:17.563 "num_base_bdevs": 4, 00:16:17.563 "num_base_bdevs_discovered": 2, 00:16:17.563 "num_base_bdevs_operational": 2, 00:16:17.563 "base_bdevs_list": [ 00:16:17.563 { 00:16:17.563 "name": null, 00:16:17.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.563 "is_configured": false, 00:16:17.563 "data_offset": 0, 00:16:17.563 "data_size": 63488 00:16:17.563 }, 00:16:17.563 { 00:16:17.563 "name": null, 00:16:17.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.563 "is_configured": false, 00:16:17.563 "data_offset": 2048, 00:16:17.563 "data_size": 63488 00:16:17.563 }, 00:16:17.563 { 00:16:17.563 "name": "BaseBdev3", 00:16:17.563 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:17.563 "is_configured": true, 00:16:17.563 "data_offset": 2048, 00:16:17.563 "data_size": 63488 00:16:17.563 }, 00:16:17.563 { 00:16:17.563 "name": "BaseBdev4", 00:16:17.563 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:17.563 "is_configured": true, 00:16:17.563 "data_offset": 2048, 00:16:17.563 "data_size": 63488 00:16:17.563 } 00:16:17.563 ] 00:16:17.563 }' 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.563 16:31:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.563 [2024-12-06 16:31:59.247983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.563 [2024-12-06 16:31:59.248169] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:17.563 [2024-12-06 16:31:59.248186] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:16:17.563 request: 00:16:17.563 { 00:16:17.563 "base_bdev": "BaseBdev1", 00:16:17.563 "raid_bdev": "raid_bdev1", 00:16:17.563 "method": "bdev_raid_add_base_bdev", 00:16:17.563 "req_id": 1 00:16:17.563 } 00:16:17.563 Got JSON-RPC error response 00:16:17.563 response: 00:16:17.563 { 00:16:17.563 "code": -22, 00:16:17.563 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:17.563 } 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:17.563 16:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.498 16:32:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.498 "name": "raid_bdev1", 00:16:18.498 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:18.498 "strip_size_kb": 0, 00:16:18.498 "state": "online", 00:16:18.498 "raid_level": "raid1", 00:16:18.498 "superblock": true, 00:16:18.498 "num_base_bdevs": 4, 00:16:18.498 "num_base_bdevs_discovered": 2, 00:16:18.498 "num_base_bdevs_operational": 2, 00:16:18.498 "base_bdevs_list": [ 00:16:18.498 { 00:16:18.498 "name": null, 00:16:18.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.498 "is_configured": false, 00:16:18.498 "data_offset": 0, 00:16:18.498 "data_size": 63488 00:16:18.498 }, 00:16:18.498 { 00:16:18.498 "name": null, 00:16:18.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.498 "is_configured": false, 00:16:18.498 "data_offset": 2048, 00:16:18.498 "data_size": 63488 00:16:18.498 }, 00:16:18.498 { 00:16:18.498 "name": "BaseBdev3", 00:16:18.498 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:18.498 "is_configured": true, 00:16:18.498 "data_offset": 2048, 00:16:18.498 "data_size": 63488 00:16:18.498 }, 00:16:18.498 { 00:16:18.498 "name": "BaseBdev4", 00:16:18.498 "uuid": 
"05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:18.498 "is_configured": true, 00:16:18.498 "data_offset": 2048, 00:16:18.498 "data_size": 63488 00:16:18.498 } 00:16:18.498 ] 00:16:18.498 }' 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.498 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.066 "name": "raid_bdev1", 00:16:19.066 "uuid": "357e6e8a-ee2f-4acf-8866-0efb84ec5b36", 00:16:19.066 "strip_size_kb": 0, 00:16:19.066 "state": "online", 00:16:19.066 "raid_level": "raid1", 00:16:19.066 "superblock": true, 00:16:19.066 "num_base_bdevs": 4, 00:16:19.066 "num_base_bdevs_discovered": 2, 00:16:19.066 "num_base_bdevs_operational": 2, 00:16:19.066 
"base_bdevs_list": [ 00:16:19.066 { 00:16:19.066 "name": null, 00:16:19.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.066 "is_configured": false, 00:16:19.066 "data_offset": 0, 00:16:19.066 "data_size": 63488 00:16:19.066 }, 00:16:19.066 { 00:16:19.066 "name": null, 00:16:19.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.066 "is_configured": false, 00:16:19.066 "data_offset": 2048, 00:16:19.066 "data_size": 63488 00:16:19.066 }, 00:16:19.066 { 00:16:19.066 "name": "BaseBdev3", 00:16:19.066 "uuid": "7de55c60-cabf-5ce7-9e18-502a7483a000", 00:16:19.066 "is_configured": true, 00:16:19.066 "data_offset": 2048, 00:16:19.066 "data_size": 63488 00:16:19.066 }, 00:16:19.066 { 00:16:19.066 "name": "BaseBdev4", 00:16:19.066 "uuid": "05142d66-f9d5-5de6-954b-9912360cc0b2", 00:16:19.066 "is_configured": true, 00:16:19.066 "data_offset": 2048, 00:16:19.066 "data_size": 63488 00:16:19.066 } 00:16:19.066 ] 00:16:19.066 }' 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 90233 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 90233 ']' 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 90233 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90233 00:16:19.066 killing process with pid 90233 00:16:19.066 Received shutdown signal, test time was about 18.058311 seconds 00:16:19.066 00:16:19.066 Latency(us) 00:16:19.066 [2024-12-06T16:32:00.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.066 [2024-12-06T16:32:00.905Z] =================================================================================================================== 00:16:19.066 [2024-12-06T16:32:00.905Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90233' 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 90233 00:16:19.066 [2024-12-06 16:32:00.882548] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.066 [2024-12-06 16:32:00.882689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.066 16:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 90233 00:16:19.066 [2024-12-06 16:32:00.882769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.066 [2024-12-06 16:32:00.882779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:19.326 [2024-12-06 16:32:00.931272] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.326 16:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:19.326 00:16:19.326 real 0m20.135s 00:16:19.326 user 0m26.924s 00:16:19.326 sys 0m2.667s 00:16:19.326 
************************************ 00:16:19.326 END TEST raid_rebuild_test_sb_io 00:16:19.326 ************************************ 00:16:19.326 16:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.326 16:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.585 16:32:01 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:19.585 16:32:01 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:19.585 16:32:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:19.585 16:32:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:19.585 16:32:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.585 ************************************ 00:16:19.585 START TEST raid5f_state_function_test 00:16:19.585 ************************************ 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:19.585 16:32:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:16:19.585 Process raid pid: 90944 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90944 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90944' 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90944 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 90944 ']' 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.585 16:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.585 [2024-12-06 16:32:01.305496] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:16:19.585 [2024-12-06 16:32:01.305696] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.844 [2024-12-06 16:32:01.480033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.844 [2024-12-06 16:32:01.505325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.844 [2024-12-06 16:32:01.547055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.844 [2024-12-06 16:32:01.547166] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.412 [2024-12-06 16:32:02.177206] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:20.412 [2024-12-06 16:32:02.177330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:20.412 [2024-12-06 16:32:02.177360] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.412 [2024-12-06 16:32:02.177383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.412 [2024-12-06 16:32:02.177406] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:20.412 [2024-12-06 16:32:02.177420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.412 "name": "Existed_Raid", 00:16:20.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.412 "strip_size_kb": 64, 00:16:20.412 "state": "configuring", 00:16:20.412 "raid_level": "raid5f", 00:16:20.412 "superblock": false, 00:16:20.412 "num_base_bdevs": 3, 00:16:20.412 "num_base_bdevs_discovered": 0, 00:16:20.412 "num_base_bdevs_operational": 3, 00:16:20.412 "base_bdevs_list": [ 00:16:20.412 { 00:16:20.412 "name": "BaseBdev1", 00:16:20.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.412 "is_configured": false, 00:16:20.412 "data_offset": 0, 00:16:20.412 "data_size": 0 00:16:20.412 }, 00:16:20.412 { 00:16:20.412 "name": "BaseBdev2", 00:16:20.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.412 "is_configured": false, 00:16:20.412 "data_offset": 0, 00:16:20.412 "data_size": 0 00:16:20.412 }, 00:16:20.412 { 00:16:20.412 "name": "BaseBdev3", 00:16:20.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.412 "is_configured": false, 00:16:20.412 "data_offset": 0, 00:16:20.412 "data_size": 0 00:16:20.412 } 00:16:20.412 ] 00:16:20.412 }' 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.412 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.981 [2024-12-06 16:32:02.632358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:20.981 [2024-12-06 16:32:02.632456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.981 [2024-12-06 16:32:02.644349] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:20.981 [2024-12-06 16:32:02.644427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:20.981 [2024-12-06 16:32:02.644456] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.981 [2024-12-06 16:32:02.644479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.981 [2024-12-06 16:32:02.644496] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:20.981 [2024-12-06 16:32:02.644516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.981 [2024-12-06 16:32:02.665087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.981 BaseBdev1 00:16:20.981 16:32:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.981 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.982 [ 00:16:20.982 { 00:16:20.982 "name": "BaseBdev1", 00:16:20.982 "aliases": [ 00:16:20.982 "b26a51cc-36a2-489a-a152-d65aa652bc81" 00:16:20.982 ], 00:16:20.982 "product_name": "Malloc disk", 00:16:20.982 "block_size": 512, 00:16:20.982 "num_blocks": 65536, 00:16:20.982 "uuid": "b26a51cc-36a2-489a-a152-d65aa652bc81", 00:16:20.982 "assigned_rate_limits": { 00:16:20.982 "rw_ios_per_sec": 0, 00:16:20.982 
"rw_mbytes_per_sec": 0, 00:16:20.982 "r_mbytes_per_sec": 0, 00:16:20.982 "w_mbytes_per_sec": 0 00:16:20.982 }, 00:16:20.982 "claimed": true, 00:16:20.982 "claim_type": "exclusive_write", 00:16:20.982 "zoned": false, 00:16:20.982 "supported_io_types": { 00:16:20.982 "read": true, 00:16:20.982 "write": true, 00:16:20.982 "unmap": true, 00:16:20.982 "flush": true, 00:16:20.982 "reset": true, 00:16:20.982 "nvme_admin": false, 00:16:20.982 "nvme_io": false, 00:16:20.982 "nvme_io_md": false, 00:16:20.982 "write_zeroes": true, 00:16:20.982 "zcopy": true, 00:16:20.982 "get_zone_info": false, 00:16:20.982 "zone_management": false, 00:16:20.982 "zone_append": false, 00:16:20.982 "compare": false, 00:16:20.982 "compare_and_write": false, 00:16:20.982 "abort": true, 00:16:20.982 "seek_hole": false, 00:16:20.982 "seek_data": false, 00:16:20.982 "copy": true, 00:16:20.982 "nvme_iov_md": false 00:16:20.982 }, 00:16:20.982 "memory_domains": [ 00:16:20.982 { 00:16:20.982 "dma_device_id": "system", 00:16:20.982 "dma_device_type": 1 00:16:20.982 }, 00:16:20.982 { 00:16:20.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.982 "dma_device_type": 2 00:16:20.982 } 00:16:20.982 ], 00:16:20.982 "driver_specific": {} 00:16:20.982 } 00:16:20.982 ] 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.982 16:32:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.982 "name": "Existed_Raid", 00:16:20.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.982 "strip_size_kb": 64, 00:16:20.982 "state": "configuring", 00:16:20.982 "raid_level": "raid5f", 00:16:20.982 "superblock": false, 00:16:20.982 "num_base_bdevs": 3, 00:16:20.982 "num_base_bdevs_discovered": 1, 00:16:20.982 "num_base_bdevs_operational": 3, 00:16:20.982 "base_bdevs_list": [ 00:16:20.982 { 00:16:20.982 "name": "BaseBdev1", 00:16:20.982 "uuid": "b26a51cc-36a2-489a-a152-d65aa652bc81", 00:16:20.982 "is_configured": true, 00:16:20.982 "data_offset": 0, 00:16:20.982 "data_size": 65536 00:16:20.982 }, 00:16:20.982 { 00:16:20.982 "name": 
"BaseBdev2", 00:16:20.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.982 "is_configured": false, 00:16:20.982 "data_offset": 0, 00:16:20.982 "data_size": 0 00:16:20.982 }, 00:16:20.982 { 00:16:20.982 "name": "BaseBdev3", 00:16:20.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.982 "is_configured": false, 00:16:20.982 "data_offset": 0, 00:16:20.982 "data_size": 0 00:16:20.982 } 00:16:20.982 ] 00:16:20.982 }' 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.982 16:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.551 [2024-12-06 16:32:03.132340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.551 [2024-12-06 16:32:03.132396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.551 [2024-12-06 16:32:03.144375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.551 [2024-12-06 16:32:03.146317] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:21.551 [2024-12-06 16:32:03.146427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.551 [2024-12-06 16:32:03.146441] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:21.551 [2024-12-06 16:32:03.146452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.551 "name": "Existed_Raid", 00:16:21.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.551 "strip_size_kb": 64, 00:16:21.551 "state": "configuring", 00:16:21.551 "raid_level": "raid5f", 00:16:21.551 "superblock": false, 00:16:21.551 "num_base_bdevs": 3, 00:16:21.551 "num_base_bdevs_discovered": 1, 00:16:21.551 "num_base_bdevs_operational": 3, 00:16:21.551 "base_bdevs_list": [ 00:16:21.551 { 00:16:21.551 "name": "BaseBdev1", 00:16:21.551 "uuid": "b26a51cc-36a2-489a-a152-d65aa652bc81", 00:16:21.551 "is_configured": true, 00:16:21.551 "data_offset": 0, 00:16:21.551 "data_size": 65536 00:16:21.551 }, 00:16:21.551 { 00:16:21.551 "name": "BaseBdev2", 00:16:21.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.551 "is_configured": false, 00:16:21.551 "data_offset": 0, 00:16:21.551 "data_size": 0 00:16:21.551 }, 00:16:21.551 { 00:16:21.551 "name": "BaseBdev3", 00:16:21.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.551 "is_configured": false, 00:16:21.551 "data_offset": 0, 00:16:21.551 "data_size": 0 00:16:21.551 } 00:16:21.551 ] 00:16:21.551 }' 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.551 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.810 [2024-12-06 16:32:03.587055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.810 BaseBdev2 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.810 [ 00:16:21.810 { 00:16:21.810 "name": "BaseBdev2", 00:16:21.810 "aliases": [ 00:16:21.810 "3859b64e-7b6f-4512-926f-f323f895375a" 00:16:21.810 ], 00:16:21.810 "product_name": "Malloc disk", 00:16:21.810 "block_size": 512, 00:16:21.810 "num_blocks": 65536, 00:16:21.810 "uuid": "3859b64e-7b6f-4512-926f-f323f895375a", 00:16:21.810 "assigned_rate_limits": { 00:16:21.810 "rw_ios_per_sec": 0, 00:16:21.810 "rw_mbytes_per_sec": 0, 00:16:21.810 "r_mbytes_per_sec": 0, 00:16:21.810 "w_mbytes_per_sec": 0 00:16:21.810 }, 00:16:21.810 "claimed": true, 00:16:21.810 "claim_type": "exclusive_write", 00:16:21.810 "zoned": false, 00:16:21.810 "supported_io_types": { 00:16:21.810 "read": true, 00:16:21.810 "write": true, 00:16:21.810 "unmap": true, 00:16:21.810 "flush": true, 00:16:21.810 "reset": true, 00:16:21.810 "nvme_admin": false, 00:16:21.810 "nvme_io": false, 00:16:21.810 "nvme_io_md": false, 00:16:21.810 "write_zeroes": true, 00:16:21.810 "zcopy": true, 00:16:21.810 "get_zone_info": false, 00:16:21.810 "zone_management": false, 00:16:21.810 "zone_append": false, 00:16:21.810 "compare": false, 00:16:21.810 "compare_and_write": false, 00:16:21.810 "abort": true, 00:16:21.810 "seek_hole": false, 00:16:21.810 "seek_data": false, 00:16:21.810 "copy": true, 00:16:21.810 "nvme_iov_md": false 00:16:21.810 }, 00:16:21.810 "memory_domains": [ 00:16:21.810 { 00:16:21.810 "dma_device_id": "system", 00:16:21.810 "dma_device_type": 1 00:16:21.810 }, 00:16:21.810 { 00:16:21.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.810 "dma_device_type": 2 00:16:21.810 } 00:16:21.810 ], 00:16:21.810 "driver_specific": {} 00:16:21.810 } 00:16:21.810 ] 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.810 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.069 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.069 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:22.069 "name": "Existed_Raid", 00:16:22.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.069 "strip_size_kb": 64, 00:16:22.069 "state": "configuring", 00:16:22.069 "raid_level": "raid5f", 00:16:22.069 "superblock": false, 00:16:22.069 "num_base_bdevs": 3, 00:16:22.069 "num_base_bdevs_discovered": 2, 00:16:22.069 "num_base_bdevs_operational": 3, 00:16:22.069 "base_bdevs_list": [ 00:16:22.069 { 00:16:22.069 "name": "BaseBdev1", 00:16:22.069 "uuid": "b26a51cc-36a2-489a-a152-d65aa652bc81", 00:16:22.069 "is_configured": true, 00:16:22.069 "data_offset": 0, 00:16:22.069 "data_size": 65536 00:16:22.069 }, 00:16:22.069 { 00:16:22.069 "name": "BaseBdev2", 00:16:22.069 "uuid": "3859b64e-7b6f-4512-926f-f323f895375a", 00:16:22.069 "is_configured": true, 00:16:22.069 "data_offset": 0, 00:16:22.069 "data_size": 65536 00:16:22.069 }, 00:16:22.069 { 00:16:22.069 "name": "BaseBdev3", 00:16:22.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.069 "is_configured": false, 00:16:22.069 "data_offset": 0, 00:16:22.069 "data_size": 0 00:16:22.069 } 00:16:22.069 ] 00:16:22.069 }' 00:16:22.069 16:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.069 16:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.333 [2024-12-06 16:32:04.098120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.333 [2024-12-06 16:32:04.098188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:22.333 [2024-12-06 16:32:04.098220] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:22.333 [2024-12-06 16:32:04.098540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:22.333 [2024-12-06 16:32:04.099150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:22.333 [2024-12-06 16:32:04.099169] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:22.333 [2024-12-06 16:32:04.099424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.333 BaseBdev3 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.333 [ 00:16:22.333 { 00:16:22.333 "name": "BaseBdev3", 00:16:22.333 "aliases": [ 00:16:22.333 "3f61b65b-bc29-4f6c-858e-6c96068eb2c7" 00:16:22.333 ], 00:16:22.333 "product_name": "Malloc disk", 00:16:22.333 "block_size": 512, 00:16:22.333 "num_blocks": 65536, 00:16:22.333 "uuid": "3f61b65b-bc29-4f6c-858e-6c96068eb2c7", 00:16:22.333 "assigned_rate_limits": { 00:16:22.333 "rw_ios_per_sec": 0, 00:16:22.333 "rw_mbytes_per_sec": 0, 00:16:22.333 "r_mbytes_per_sec": 0, 00:16:22.333 "w_mbytes_per_sec": 0 00:16:22.333 }, 00:16:22.333 "claimed": true, 00:16:22.333 "claim_type": "exclusive_write", 00:16:22.333 "zoned": false, 00:16:22.333 "supported_io_types": { 00:16:22.333 "read": true, 00:16:22.333 "write": true, 00:16:22.333 "unmap": true, 00:16:22.333 "flush": true, 00:16:22.333 "reset": true, 00:16:22.333 "nvme_admin": false, 00:16:22.333 "nvme_io": false, 00:16:22.333 "nvme_io_md": false, 00:16:22.333 "write_zeroes": true, 00:16:22.333 "zcopy": true, 00:16:22.333 "get_zone_info": false, 00:16:22.333 "zone_management": false, 00:16:22.333 "zone_append": false, 00:16:22.333 "compare": false, 00:16:22.333 "compare_and_write": false, 00:16:22.333 "abort": true, 00:16:22.333 "seek_hole": false, 00:16:22.333 "seek_data": false, 00:16:22.333 "copy": true, 00:16:22.333 "nvme_iov_md": false 00:16:22.333 }, 00:16:22.333 "memory_domains": [ 00:16:22.333 { 00:16:22.333 "dma_device_id": "system", 00:16:22.333 "dma_device_type": 1 00:16:22.333 }, 00:16:22.333 { 00:16:22.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.333 "dma_device_type": 2 00:16:22.333 } 00:16:22.333 ], 00:16:22.333 "driver_specific": {} 00:16:22.333 } 00:16:22.333 ] 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.333 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.333 16:32:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.604 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.604 "name": "Existed_Raid", 00:16:22.604 "uuid": "89429326-d83e-4a98-815e-e90abb69cef1", 00:16:22.604 "strip_size_kb": 64, 00:16:22.604 "state": "online", 00:16:22.604 "raid_level": "raid5f", 00:16:22.604 "superblock": false, 00:16:22.604 "num_base_bdevs": 3, 00:16:22.604 "num_base_bdevs_discovered": 3, 00:16:22.604 "num_base_bdevs_operational": 3, 00:16:22.604 "base_bdevs_list": [ 00:16:22.604 { 00:16:22.604 "name": "BaseBdev1", 00:16:22.604 "uuid": "b26a51cc-36a2-489a-a152-d65aa652bc81", 00:16:22.604 "is_configured": true, 00:16:22.604 "data_offset": 0, 00:16:22.604 "data_size": 65536 00:16:22.604 }, 00:16:22.604 { 00:16:22.604 "name": "BaseBdev2", 00:16:22.604 "uuid": "3859b64e-7b6f-4512-926f-f323f895375a", 00:16:22.604 "is_configured": true, 00:16:22.604 "data_offset": 0, 00:16:22.604 "data_size": 65536 00:16:22.604 }, 00:16:22.604 { 00:16:22.604 "name": "BaseBdev3", 00:16:22.604 "uuid": "3f61b65b-bc29-4f6c-858e-6c96068eb2c7", 00:16:22.604 "is_configured": true, 00:16:22.604 "data_offset": 0, 00:16:22.604 "data_size": 65536 00:16:22.604 } 00:16:22.604 ] 00:16:22.604 }' 00:16:22.604 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.604 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.867 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:22.867 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:22.867 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:22.867 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:22.867 16:32:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:22.867 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:22.868 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:22.868 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:22.868 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.868 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.868 [2024-12-06 16:32:04.609555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.868 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.868 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:22.868 "name": "Existed_Raid", 00:16:22.868 "aliases": [ 00:16:22.868 "89429326-d83e-4a98-815e-e90abb69cef1" 00:16:22.868 ], 00:16:22.868 "product_name": "Raid Volume", 00:16:22.868 "block_size": 512, 00:16:22.868 "num_blocks": 131072, 00:16:22.868 "uuid": "89429326-d83e-4a98-815e-e90abb69cef1", 00:16:22.868 "assigned_rate_limits": { 00:16:22.868 "rw_ios_per_sec": 0, 00:16:22.868 "rw_mbytes_per_sec": 0, 00:16:22.868 "r_mbytes_per_sec": 0, 00:16:22.868 "w_mbytes_per_sec": 0 00:16:22.868 }, 00:16:22.868 "claimed": false, 00:16:22.868 "zoned": false, 00:16:22.868 "supported_io_types": { 00:16:22.868 "read": true, 00:16:22.868 "write": true, 00:16:22.868 "unmap": false, 00:16:22.868 "flush": false, 00:16:22.868 "reset": true, 00:16:22.868 "nvme_admin": false, 00:16:22.868 "nvme_io": false, 00:16:22.868 "nvme_io_md": false, 00:16:22.868 "write_zeroes": true, 00:16:22.868 "zcopy": false, 00:16:22.868 "get_zone_info": false, 00:16:22.868 "zone_management": false, 00:16:22.868 "zone_append": false, 
00:16:22.868 "compare": false, 00:16:22.868 "compare_and_write": false, 00:16:22.868 "abort": false, 00:16:22.868 "seek_hole": false, 00:16:22.868 "seek_data": false, 00:16:22.868 "copy": false, 00:16:22.868 "nvme_iov_md": false 00:16:22.868 }, 00:16:22.868 "driver_specific": { 00:16:22.868 "raid": { 00:16:22.868 "uuid": "89429326-d83e-4a98-815e-e90abb69cef1", 00:16:22.868 "strip_size_kb": 64, 00:16:22.868 "state": "online", 00:16:22.868 "raid_level": "raid5f", 00:16:22.868 "superblock": false, 00:16:22.868 "num_base_bdevs": 3, 00:16:22.868 "num_base_bdevs_discovered": 3, 00:16:22.868 "num_base_bdevs_operational": 3, 00:16:22.868 "base_bdevs_list": [ 00:16:22.868 { 00:16:22.868 "name": "BaseBdev1", 00:16:22.868 "uuid": "b26a51cc-36a2-489a-a152-d65aa652bc81", 00:16:22.868 "is_configured": true, 00:16:22.868 "data_offset": 0, 00:16:22.868 "data_size": 65536 00:16:22.868 }, 00:16:22.868 { 00:16:22.868 "name": "BaseBdev2", 00:16:22.868 "uuid": "3859b64e-7b6f-4512-926f-f323f895375a", 00:16:22.868 "is_configured": true, 00:16:22.868 "data_offset": 0, 00:16:22.868 "data_size": 65536 00:16:22.868 }, 00:16:22.868 { 00:16:22.868 "name": "BaseBdev3", 00:16:22.868 "uuid": "3f61b65b-bc29-4f6c-858e-6c96068eb2c7", 00:16:22.868 "is_configured": true, 00:16:22.868 "data_offset": 0, 00:16:22.868 "data_size": 65536 00:16:22.868 } 00:16:22.868 ] 00:16:22.868 } 00:16:22.868 } 00:16:22.868 }' 00:16:22.868 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:22.868 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:22.868 BaseBdev2 00:16:22.868 BaseBdev3' 00:16:22.868 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.129 [2024-12-06 16:32:04.896906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:23.129 
16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.129 "name": "Existed_Raid", 00:16:23.129 "uuid": "89429326-d83e-4a98-815e-e90abb69cef1", 00:16:23.129 "strip_size_kb": 64, 00:16:23.129 "state": 
"online", 00:16:23.129 "raid_level": "raid5f", 00:16:23.129 "superblock": false, 00:16:23.129 "num_base_bdevs": 3, 00:16:23.129 "num_base_bdevs_discovered": 2, 00:16:23.129 "num_base_bdevs_operational": 2, 00:16:23.129 "base_bdevs_list": [ 00:16:23.129 { 00:16:23.129 "name": null, 00:16:23.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.129 "is_configured": false, 00:16:23.129 "data_offset": 0, 00:16:23.129 "data_size": 65536 00:16:23.129 }, 00:16:23.129 { 00:16:23.129 "name": "BaseBdev2", 00:16:23.129 "uuid": "3859b64e-7b6f-4512-926f-f323f895375a", 00:16:23.129 "is_configured": true, 00:16:23.129 "data_offset": 0, 00:16:23.129 "data_size": 65536 00:16:23.129 }, 00:16:23.129 { 00:16:23.129 "name": "BaseBdev3", 00:16:23.129 "uuid": "3f61b65b-bc29-4f6c-858e-6c96068eb2c7", 00:16:23.129 "is_configured": true, 00:16:23.129 "data_offset": 0, 00:16:23.129 "data_size": 65536 00:16:23.129 } 00:16:23.129 ] 00:16:23.129 }' 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.129 16:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.699 [2024-12-06 16:32:05.427371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:23.699 [2024-12-06 16:32:05.427523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.699 [2024-12-06 16:32:05.438956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.699 [2024-12-06 16:32:05.502870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:23.699 [2024-12-06 16:32:05.502920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.699 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.960 BaseBdev2 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:23.960 [ 00:16:23.960 { 00:16:23.960 "name": "BaseBdev2", 00:16:23.960 "aliases": [ 00:16:23.960 "2ba321c4-a0d2-4566-88af-bda5bfa7d3d4" 00:16:23.960 ], 00:16:23.960 "product_name": "Malloc disk", 00:16:23.960 "block_size": 512, 00:16:23.960 "num_blocks": 65536, 00:16:23.960 "uuid": "2ba321c4-a0d2-4566-88af-bda5bfa7d3d4", 00:16:23.960 "assigned_rate_limits": { 00:16:23.960 "rw_ios_per_sec": 0, 00:16:23.960 "rw_mbytes_per_sec": 0, 00:16:23.960 "r_mbytes_per_sec": 0, 00:16:23.960 "w_mbytes_per_sec": 0 00:16:23.960 }, 00:16:23.960 "claimed": false, 00:16:23.960 "zoned": false, 00:16:23.960 "supported_io_types": { 00:16:23.960 "read": true, 00:16:23.960 "write": true, 00:16:23.960 "unmap": true, 00:16:23.960 "flush": true, 00:16:23.960 "reset": true, 00:16:23.960 "nvme_admin": false, 00:16:23.960 "nvme_io": false, 00:16:23.960 "nvme_io_md": false, 00:16:23.960 "write_zeroes": true, 00:16:23.960 "zcopy": true, 00:16:23.960 "get_zone_info": false, 00:16:23.960 "zone_management": false, 00:16:23.960 "zone_append": false, 00:16:23.960 "compare": false, 00:16:23.960 "compare_and_write": false, 00:16:23.960 "abort": true, 00:16:23.960 "seek_hole": false, 00:16:23.960 "seek_data": false, 00:16:23.960 "copy": true, 00:16:23.960 "nvme_iov_md": false 00:16:23.960 }, 00:16:23.960 "memory_domains": [ 00:16:23.960 { 00:16:23.960 "dma_device_id": "system", 00:16:23.960 "dma_device_type": 1 00:16:23.960 }, 00:16:23.960 { 00:16:23.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.960 "dma_device_type": 2 00:16:23.960 } 00:16:23.960 ], 00:16:23.960 "driver_specific": {} 00:16:23.960 } 00:16:23.960 ] 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.960 BaseBdev3 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.960 [ 00:16:23.960 { 00:16:23.960 "name": "BaseBdev3", 00:16:23.960 "aliases": [ 00:16:23.960 "26f8c289-9c51-4a1b-977a-743a81387a66" 00:16:23.960 ], 00:16:23.960 "product_name": "Malloc disk", 00:16:23.960 "block_size": 512, 00:16:23.960 "num_blocks": 65536, 00:16:23.960 "uuid": "26f8c289-9c51-4a1b-977a-743a81387a66", 00:16:23.960 "assigned_rate_limits": { 00:16:23.960 "rw_ios_per_sec": 0, 00:16:23.960 "rw_mbytes_per_sec": 0, 00:16:23.960 "r_mbytes_per_sec": 0, 00:16:23.960 "w_mbytes_per_sec": 0 00:16:23.960 }, 00:16:23.960 "claimed": false, 00:16:23.960 "zoned": false, 00:16:23.960 "supported_io_types": { 00:16:23.960 "read": true, 00:16:23.960 "write": true, 00:16:23.960 "unmap": true, 00:16:23.960 "flush": true, 00:16:23.960 "reset": true, 00:16:23.960 "nvme_admin": false, 00:16:23.960 "nvme_io": false, 00:16:23.960 "nvme_io_md": false, 00:16:23.960 "write_zeroes": true, 00:16:23.960 "zcopy": true, 00:16:23.960 "get_zone_info": false, 00:16:23.960 "zone_management": false, 00:16:23.960 "zone_append": false, 00:16:23.960 "compare": false, 00:16:23.960 "compare_and_write": false, 00:16:23.960 "abort": true, 00:16:23.960 "seek_hole": false, 00:16:23.960 "seek_data": false, 00:16:23.960 "copy": true, 00:16:23.960 "nvme_iov_md": false 00:16:23.960 }, 00:16:23.960 "memory_domains": [ 00:16:23.960 { 00:16:23.960 "dma_device_id": "system", 00:16:23.960 "dma_device_type": 1 00:16:23.960 }, 00:16:23.960 { 00:16:23.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.960 "dma_device_type": 2 00:16:23.960 } 00:16:23.960 ], 00:16:23.960 "driver_specific": {} 00:16:23.960 } 00:16:23.960 ] 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:23.960 16:32:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.960 [2024-12-06 16:32:05.670342] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.960 [2024-12-06 16:32:05.670423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.960 [2024-12-06 16:32:05.670471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.960 [2024-12-06 16:32:05.672361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:23.960 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.961 16:32:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.961 "name": "Existed_Raid", 00:16:23.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.961 "strip_size_kb": 64, 00:16:23.961 "state": "configuring", 00:16:23.961 "raid_level": "raid5f", 00:16:23.961 "superblock": false, 00:16:23.961 "num_base_bdevs": 3, 00:16:23.961 "num_base_bdevs_discovered": 2, 00:16:23.961 "num_base_bdevs_operational": 3, 00:16:23.961 "base_bdevs_list": [ 00:16:23.961 { 00:16:23.961 "name": "BaseBdev1", 00:16:23.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.961 "is_configured": false, 00:16:23.961 "data_offset": 0, 00:16:23.961 "data_size": 0 00:16:23.961 }, 00:16:23.961 { 00:16:23.961 "name": "BaseBdev2", 00:16:23.961 "uuid": "2ba321c4-a0d2-4566-88af-bda5bfa7d3d4", 00:16:23.961 "is_configured": true, 00:16:23.961 "data_offset": 0, 00:16:23.961 "data_size": 65536 00:16:23.961 }, 00:16:23.961 { 00:16:23.961 "name": "BaseBdev3", 00:16:23.961 "uuid": "26f8c289-9c51-4a1b-977a-743a81387a66", 00:16:23.961 "is_configured": true, 
00:16:23.961 "data_offset": 0, 00:16:23.961 "data_size": 65536 00:16:23.961 } 00:16:23.961 ] 00:16:23.961 }' 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.961 16:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.529 [2024-12-06 16:32:06.137574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.529 16:32:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.529 "name": "Existed_Raid", 00:16:24.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.529 "strip_size_kb": 64, 00:16:24.529 "state": "configuring", 00:16:24.529 "raid_level": "raid5f", 00:16:24.529 "superblock": false, 00:16:24.529 "num_base_bdevs": 3, 00:16:24.529 "num_base_bdevs_discovered": 1, 00:16:24.529 "num_base_bdevs_operational": 3, 00:16:24.529 "base_bdevs_list": [ 00:16:24.529 { 00:16:24.529 "name": "BaseBdev1", 00:16:24.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.529 "is_configured": false, 00:16:24.529 "data_offset": 0, 00:16:24.529 "data_size": 0 00:16:24.529 }, 00:16:24.529 { 00:16:24.529 "name": null, 00:16:24.529 "uuid": "2ba321c4-a0d2-4566-88af-bda5bfa7d3d4", 00:16:24.529 "is_configured": false, 00:16:24.529 "data_offset": 0, 00:16:24.529 "data_size": 65536 00:16:24.529 }, 00:16:24.529 { 00:16:24.529 "name": "BaseBdev3", 00:16:24.529 "uuid": "26f8c289-9c51-4a1b-977a-743a81387a66", 00:16:24.529 "is_configured": true, 00:16:24.529 "data_offset": 0, 00:16:24.529 "data_size": 65536 00:16:24.529 } 00:16:24.529 ] 00:16:24.529 }' 00:16:24.529 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.529 16:32:06 
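The verify step above captures `bdev_raid_get_bdevs all` and filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, then checks the counters inside the selected object. A minimal Python sketch of the same selection, run against a trimmed copy of the JSON shape captured in the trace (values copied from the log; this is illustrative, not the test suite's own code):

```python
import json

# Trimmed copy of the bdev_raid_get_bdevs output captured above:
# one base bdev removed, so only one slot is still configured.
raw = """
[
  {
    "name": "Existed_Raid",
    "uuid": "00000000-0000-0000-0000-000000000000",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "superblock": false,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": false},
      {"name": null,        "is_configured": false},
      {"name": "BaseBdev3", "is_configured": true}
    ]
  }
]
"""

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in json.loads(raw) if b["name"] == "Existed_Raid")

# num_base_bdevs_discovered tracks how many base bdev slots are configured.
configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
assert configured == info["num_base_bdevs_discovered"]
print(info["state"])  # configuring
```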
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.788 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:24.788 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.788 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.788 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.046 [2024-12-06 16:32:06.659982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.046 BaseBdev1 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:25.046 16:32:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.046 [ 00:16:25.046 { 00:16:25.046 "name": "BaseBdev1", 00:16:25.046 "aliases": [ 00:16:25.046 "8e3af327-a59c-4bc8-b8cc-c916cac57032" 00:16:25.046 ], 00:16:25.046 "product_name": "Malloc disk", 00:16:25.046 "block_size": 512, 00:16:25.046 "num_blocks": 65536, 00:16:25.046 "uuid": "8e3af327-a59c-4bc8-b8cc-c916cac57032", 00:16:25.046 "assigned_rate_limits": { 00:16:25.046 "rw_ios_per_sec": 0, 00:16:25.046 "rw_mbytes_per_sec": 0, 00:16:25.046 "r_mbytes_per_sec": 0, 00:16:25.046 "w_mbytes_per_sec": 0 00:16:25.046 }, 00:16:25.046 "claimed": true, 00:16:25.046 "claim_type": "exclusive_write", 00:16:25.046 "zoned": false, 00:16:25.046 "supported_io_types": { 00:16:25.046 "read": true, 00:16:25.046 "write": true, 00:16:25.046 "unmap": true, 00:16:25.046 "flush": true, 00:16:25.046 "reset": true, 00:16:25.046 "nvme_admin": false, 00:16:25.046 "nvme_io": false, 00:16:25.046 "nvme_io_md": false, 00:16:25.046 "write_zeroes": true, 00:16:25.046 "zcopy": true, 00:16:25.046 "get_zone_info": false, 00:16:25.046 "zone_management": false, 00:16:25.046 "zone_append": false, 00:16:25.046 
"compare": false, 00:16:25.046 "compare_and_write": false, 00:16:25.046 "abort": true, 00:16:25.046 "seek_hole": false, 00:16:25.046 "seek_data": false, 00:16:25.046 "copy": true, 00:16:25.046 "nvme_iov_md": false 00:16:25.046 }, 00:16:25.046 "memory_domains": [ 00:16:25.046 { 00:16:25.046 "dma_device_id": "system", 00:16:25.046 "dma_device_type": 1 00:16:25.046 }, 00:16:25.046 { 00:16:25.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.046 "dma_device_type": 2 00:16:25.046 } 00:16:25.046 ], 00:16:25.046 "driver_specific": {} 00:16:25.046 } 00:16:25.046 ] 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.046 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.047 16:32:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.047 "name": "Existed_Raid", 00:16:25.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.047 "strip_size_kb": 64, 00:16:25.047 "state": "configuring", 00:16:25.047 "raid_level": "raid5f", 00:16:25.047 "superblock": false, 00:16:25.047 "num_base_bdevs": 3, 00:16:25.047 "num_base_bdevs_discovered": 2, 00:16:25.047 "num_base_bdevs_operational": 3, 00:16:25.047 "base_bdevs_list": [ 00:16:25.047 { 00:16:25.047 "name": "BaseBdev1", 00:16:25.047 "uuid": "8e3af327-a59c-4bc8-b8cc-c916cac57032", 00:16:25.047 "is_configured": true, 00:16:25.047 "data_offset": 0, 00:16:25.047 "data_size": 65536 00:16:25.047 }, 00:16:25.047 { 00:16:25.047 "name": null, 00:16:25.047 "uuid": "2ba321c4-a0d2-4566-88af-bda5bfa7d3d4", 00:16:25.047 "is_configured": false, 00:16:25.047 "data_offset": 0, 00:16:25.047 "data_size": 65536 00:16:25.047 }, 00:16:25.047 { 00:16:25.047 "name": "BaseBdev3", 00:16:25.047 "uuid": "26f8c289-9c51-4a1b-977a-743a81387a66", 00:16:25.047 "is_configured": true, 00:16:25.047 "data_offset": 0, 00:16:25.047 "data_size": 65536 00:16:25.047 } 00:16:25.047 ] 00:16:25.047 }' 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.047 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.613 16:32:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.613 [2024-12-06 16:32:07.183294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.613 16:32:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.613 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.613 "name": "Existed_Raid", 00:16:25.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.613 "strip_size_kb": 64, 00:16:25.613 "state": "configuring", 00:16:25.613 "raid_level": "raid5f", 00:16:25.613 "superblock": false, 00:16:25.613 "num_base_bdevs": 3, 00:16:25.613 "num_base_bdevs_discovered": 1, 00:16:25.613 "num_base_bdevs_operational": 3, 00:16:25.613 "base_bdevs_list": [ 00:16:25.613 { 00:16:25.613 "name": "BaseBdev1", 00:16:25.613 "uuid": "8e3af327-a59c-4bc8-b8cc-c916cac57032", 00:16:25.613 "is_configured": true, 00:16:25.613 "data_offset": 0, 00:16:25.613 "data_size": 65536 00:16:25.613 }, 00:16:25.613 { 00:16:25.613 "name": null, 00:16:25.613 "uuid": "2ba321c4-a0d2-4566-88af-bda5bfa7d3d4", 00:16:25.613 "is_configured": false, 00:16:25.613 "data_offset": 0, 00:16:25.613 "data_size": 65536 00:16:25.613 }, 00:16:25.613 { 00:16:25.613 "name": null, 
00:16:25.613 "uuid": "26f8c289-9c51-4a1b-977a-743a81387a66", 00:16:25.614 "is_configured": false, 00:16:25.614 "data_offset": 0, 00:16:25.614 "data_size": 65536 00:16:25.614 } 00:16:25.614 ] 00:16:25.614 }' 00:16:25.614 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.614 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.873 [2024-12-06 16:32:07.650479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.873 16:32:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.873 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.874 "name": "Existed_Raid", 00:16:25.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.874 "strip_size_kb": 64, 00:16:25.874 "state": "configuring", 00:16:25.874 "raid_level": "raid5f", 00:16:25.874 "superblock": false, 00:16:25.874 "num_base_bdevs": 3, 00:16:25.874 "num_base_bdevs_discovered": 2, 00:16:25.874 "num_base_bdevs_operational": 3, 00:16:25.874 "base_bdevs_list": [ 00:16:25.874 { 
00:16:25.874 "name": "BaseBdev1", 00:16:25.874 "uuid": "8e3af327-a59c-4bc8-b8cc-c916cac57032", 00:16:25.874 "is_configured": true, 00:16:25.874 "data_offset": 0, 00:16:25.874 "data_size": 65536 00:16:25.874 }, 00:16:25.874 { 00:16:25.874 "name": null, 00:16:25.874 "uuid": "2ba321c4-a0d2-4566-88af-bda5bfa7d3d4", 00:16:25.874 "is_configured": false, 00:16:25.874 "data_offset": 0, 00:16:25.874 "data_size": 65536 00:16:25.874 }, 00:16:25.874 { 00:16:25.874 "name": "BaseBdev3", 00:16:25.874 "uuid": "26f8c289-9c51-4a1b-977a-743a81387a66", 00:16:25.874 "is_configured": true, 00:16:25.874 "data_offset": 0, 00:16:25.874 "data_size": 65536 00:16:25.874 } 00:16:25.874 ] 00:16:25.874 }' 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.874 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.441 [2024-12-06 16:32:08.137757] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.441 "name": "Existed_Raid", 00:16:26.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.441 "strip_size_kb": 64, 00:16:26.441 "state": "configuring", 00:16:26.441 "raid_level": "raid5f", 00:16:26.441 "superblock": false, 00:16:26.441 "num_base_bdevs": 3, 00:16:26.441 "num_base_bdevs_discovered": 1, 00:16:26.441 "num_base_bdevs_operational": 3, 00:16:26.441 "base_bdevs_list": [ 00:16:26.441 { 00:16:26.441 "name": null, 00:16:26.441 "uuid": "8e3af327-a59c-4bc8-b8cc-c916cac57032", 00:16:26.441 "is_configured": false, 00:16:26.441 "data_offset": 0, 00:16:26.441 "data_size": 65536 00:16:26.441 }, 00:16:26.441 { 00:16:26.441 "name": null, 00:16:26.441 "uuid": "2ba321c4-a0d2-4566-88af-bda5bfa7d3d4", 00:16:26.441 "is_configured": false, 00:16:26.441 "data_offset": 0, 00:16:26.441 "data_size": 65536 00:16:26.441 }, 00:16:26.441 { 00:16:26.441 "name": "BaseBdev3", 00:16:26.441 "uuid": "26f8c289-9c51-4a1b-977a-743a81387a66", 00:16:26.441 "is_configured": true, 00:16:26.441 "data_offset": 0, 00:16:26.441 "data_size": 65536 00:16:26.441 } 00:16:26.441 ] 00:16:26.441 }' 00:16:26.441 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.442 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.009 [2024-12-06 16:32:08.611900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.009 16:32:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.009 "name": "Existed_Raid", 00:16:27.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.009 "strip_size_kb": 64, 00:16:27.009 "state": "configuring", 00:16:27.009 "raid_level": "raid5f", 00:16:27.009 "superblock": false, 00:16:27.009 "num_base_bdevs": 3, 00:16:27.009 "num_base_bdevs_discovered": 2, 00:16:27.009 "num_base_bdevs_operational": 3, 00:16:27.009 "base_bdevs_list": [ 00:16:27.009 { 00:16:27.009 "name": null, 00:16:27.009 "uuid": "8e3af327-a59c-4bc8-b8cc-c916cac57032", 00:16:27.009 "is_configured": false, 00:16:27.009 "data_offset": 0, 00:16:27.009 "data_size": 65536 00:16:27.009 }, 00:16:27.009 { 00:16:27.009 "name": "BaseBdev2", 00:16:27.009 "uuid": "2ba321c4-a0d2-4566-88af-bda5bfa7d3d4", 00:16:27.009 "is_configured": true, 00:16:27.009 "data_offset": 0, 00:16:27.009 "data_size": 65536 00:16:27.009 }, 00:16:27.009 { 00:16:27.009 "name": "BaseBdev3", 00:16:27.009 "uuid": "26f8c289-9c51-4a1b-977a-743a81387a66", 00:16:27.009 "is_configured": true, 00:16:27.009 "data_offset": 0, 00:16:27.009 "data_size": 65536 00:16:27.009 } 00:16:27.009 ] 00:16:27.009 }' 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.009 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:27.268 
16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8e3af327-a59c-4bc8-b8cc-c916cac57032 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.268 [2024-12-06 16:32:09.090426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:27.268 [2024-12-06 16:32:09.090476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:27.268 [2024-12-06 16:32:09.090488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:27.268 [2024-12-06 16:32:09.090774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006080 00:16:27.268 [2024-12-06 16:32:09.091283] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:27.268 [2024-12-06 16:32:09.091300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:16:27.268 NewBaseBdev 00:16:27.268 [2024-12-06 16:32:09.091515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:27.268 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.268 16:32:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.526 [ 00:16:27.526 { 00:16:27.527 "name": "NewBaseBdev", 00:16:27.527 "aliases": [ 00:16:27.527 "8e3af327-a59c-4bc8-b8cc-c916cac57032" 00:16:27.527 ], 00:16:27.527 "product_name": "Malloc disk", 00:16:27.527 "block_size": 512, 00:16:27.527 "num_blocks": 65536, 00:16:27.527 "uuid": "8e3af327-a59c-4bc8-b8cc-c916cac57032", 00:16:27.527 "assigned_rate_limits": { 00:16:27.527 "rw_ios_per_sec": 0, 00:16:27.527 "rw_mbytes_per_sec": 0, 00:16:27.527 "r_mbytes_per_sec": 0, 00:16:27.527 "w_mbytes_per_sec": 0 00:16:27.527 }, 00:16:27.527 "claimed": true, 00:16:27.527 "claim_type": "exclusive_write", 00:16:27.527 "zoned": false, 00:16:27.527 "supported_io_types": { 00:16:27.527 "read": true, 00:16:27.527 "write": true, 00:16:27.527 "unmap": true, 00:16:27.527 "flush": true, 00:16:27.527 "reset": true, 00:16:27.527 "nvme_admin": false, 00:16:27.527 "nvme_io": false, 00:16:27.527 "nvme_io_md": false, 00:16:27.527 "write_zeroes": true, 00:16:27.527 "zcopy": true, 00:16:27.527 "get_zone_info": false, 00:16:27.527 "zone_management": false, 00:16:27.527 "zone_append": false, 00:16:27.527 "compare": false, 00:16:27.527 "compare_and_write": false, 00:16:27.527 "abort": true, 00:16:27.527 "seek_hole": false, 00:16:27.527 "seek_data": false, 00:16:27.527 "copy": true, 00:16:27.527 "nvme_iov_md": false 00:16:27.527 }, 00:16:27.527 "memory_domains": [ 00:16:27.527 { 00:16:27.527 "dma_device_id": "system", 00:16:27.527 "dma_device_type": 1 00:16:27.527 }, 00:16:27.527 { 00:16:27.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.527 "dma_device_type": 2 00:16:27.527 } 00:16:27.527 ], 00:16:27.527 "driver_specific": {} 00:16:27.527 } 00:16:27.527 ] 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:27.527 16:32:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.527 "name": "Existed_Raid", 00:16:27.527 "uuid": "555b7772-5bce-40ed-9369-fb055688bc3d", 00:16:27.527 "strip_size_kb": 64, 00:16:27.527 "state": "online", 
00:16:27.527 "raid_level": "raid5f", 00:16:27.527 "superblock": false, 00:16:27.527 "num_base_bdevs": 3, 00:16:27.527 "num_base_bdevs_discovered": 3, 00:16:27.527 "num_base_bdevs_operational": 3, 00:16:27.527 "base_bdevs_list": [ 00:16:27.527 { 00:16:27.527 "name": "NewBaseBdev", 00:16:27.527 "uuid": "8e3af327-a59c-4bc8-b8cc-c916cac57032", 00:16:27.527 "is_configured": true, 00:16:27.527 "data_offset": 0, 00:16:27.527 "data_size": 65536 00:16:27.527 }, 00:16:27.527 { 00:16:27.527 "name": "BaseBdev2", 00:16:27.527 "uuid": "2ba321c4-a0d2-4566-88af-bda5bfa7d3d4", 00:16:27.527 "is_configured": true, 00:16:27.527 "data_offset": 0, 00:16:27.527 "data_size": 65536 00:16:27.527 }, 00:16:27.527 { 00:16:27.527 "name": "BaseBdev3", 00:16:27.527 "uuid": "26f8c289-9c51-4a1b-977a-743a81387a66", 00:16:27.527 "is_configured": true, 00:16:27.527 "data_offset": 0, 00:16:27.527 "data_size": 65536 00:16:27.527 } 00:16:27.527 ] 00:16:27.527 }' 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.527 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:27.786 16:32:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.786 [2024-12-06 16:32:09.510074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.786 "name": "Existed_Raid", 00:16:27.786 "aliases": [ 00:16:27.786 "555b7772-5bce-40ed-9369-fb055688bc3d" 00:16:27.786 ], 00:16:27.786 "product_name": "Raid Volume", 00:16:27.786 "block_size": 512, 00:16:27.786 "num_blocks": 131072, 00:16:27.786 "uuid": "555b7772-5bce-40ed-9369-fb055688bc3d", 00:16:27.786 "assigned_rate_limits": { 00:16:27.786 "rw_ios_per_sec": 0, 00:16:27.786 "rw_mbytes_per_sec": 0, 00:16:27.786 "r_mbytes_per_sec": 0, 00:16:27.786 "w_mbytes_per_sec": 0 00:16:27.786 }, 00:16:27.786 "claimed": false, 00:16:27.786 "zoned": false, 00:16:27.786 "supported_io_types": { 00:16:27.786 "read": true, 00:16:27.786 "write": true, 00:16:27.786 "unmap": false, 00:16:27.786 "flush": false, 00:16:27.786 "reset": true, 00:16:27.786 "nvme_admin": false, 00:16:27.786 "nvme_io": false, 00:16:27.786 "nvme_io_md": false, 00:16:27.786 "write_zeroes": true, 00:16:27.786 "zcopy": false, 00:16:27.786 "get_zone_info": false, 00:16:27.786 "zone_management": false, 00:16:27.786 "zone_append": false, 00:16:27.786 "compare": false, 00:16:27.786 "compare_and_write": false, 00:16:27.786 "abort": false, 00:16:27.786 "seek_hole": false, 00:16:27.786 "seek_data": false, 00:16:27.786 "copy": false, 00:16:27.786 "nvme_iov_md": false 00:16:27.786 }, 00:16:27.786 "driver_specific": { 00:16:27.786 "raid": { 00:16:27.786 "uuid": 
"555b7772-5bce-40ed-9369-fb055688bc3d", 00:16:27.786 "strip_size_kb": 64, 00:16:27.786 "state": "online", 00:16:27.786 "raid_level": "raid5f", 00:16:27.786 "superblock": false, 00:16:27.786 "num_base_bdevs": 3, 00:16:27.786 "num_base_bdevs_discovered": 3, 00:16:27.786 "num_base_bdevs_operational": 3, 00:16:27.786 "base_bdevs_list": [ 00:16:27.786 { 00:16:27.786 "name": "NewBaseBdev", 00:16:27.786 "uuid": "8e3af327-a59c-4bc8-b8cc-c916cac57032", 00:16:27.786 "is_configured": true, 00:16:27.786 "data_offset": 0, 00:16:27.786 "data_size": 65536 00:16:27.786 }, 00:16:27.786 { 00:16:27.786 "name": "BaseBdev2", 00:16:27.786 "uuid": "2ba321c4-a0d2-4566-88af-bda5bfa7d3d4", 00:16:27.786 "is_configured": true, 00:16:27.786 "data_offset": 0, 00:16:27.786 "data_size": 65536 00:16:27.786 }, 00:16:27.786 { 00:16:27.786 "name": "BaseBdev3", 00:16:27.786 "uuid": "26f8c289-9c51-4a1b-977a-743a81387a66", 00:16:27.786 "is_configured": true, 00:16:27.786 "data_offset": 0, 00:16:27.786 "data_size": 65536 00:16:27.786 } 00:16:27.786 ] 00:16:27.786 } 00:16:27.786 } 00:16:27.786 }' 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.786 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:27.787 BaseBdev2 00:16:27.787 BaseBdev3' 00:16:27.787 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.045 16:32:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.045 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.046 [2024-12-06 16:32:09.785345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.046 [2024-12-06 16:32:09.785394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.046 [2024-12-06 16:32:09.785485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.046 [2024-12-06 16:32:09.785788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.046 [2024-12-06 16:32:09.785810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90944 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 90944 ']' 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 90944 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90944 00:16:28.046 killing process with pid 90944 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90944' 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 90944 00:16:28.046 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 90944 00:16:28.046 [2024-12-06 16:32:09.835084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.046 [2024-12-06 16:32:09.867994] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.305 ************************************ 00:16:28.305 END TEST raid5f_state_function_test 00:16:28.305 ************************************ 00:16:28.305 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:28.305 00:16:28.305 real 0m8.875s 00:16:28.305 user 0m15.069s 00:16:28.305 sys 0m1.937s 00:16:28.305 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.305 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.305 16:32:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:28.305 16:32:10 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:28.305 16:32:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.305 16:32:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.565 ************************************ 00:16:28.565 START TEST raid5f_state_function_test_sb 00:16:28.565 ************************************ 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:28.565 16:32:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:28.565 Process raid pid: 91545 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91545 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91545' 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 91545 00:16:28.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 91545 ']' 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.565 16:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.565 [2024-12-06 16:32:10.250240] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:16:28.565 [2024-12-06 16:32:10.250361] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.825 [2024-12-06 16:32:10.423322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.825 [2024-12-06 16:32:10.449834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.825 [2024-12-06 16:32:10.491191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.825 [2024-12-06 16:32:10.491239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.394 [2024-12-06 16:32:11.109346] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.394 [2024-12-06 16:32:11.109405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.394 [2024-12-06 16:32:11.109415] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.394 [2024-12-06 16:32:11.109424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.394 [2024-12-06 16:32:11.109433] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:29.394 [2024-12-06 16:32:11.109444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.394 16:32:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.394 "name": "Existed_Raid", 00:16:29.394 "uuid": "94345c28-bf68-4a70-b9b5-b16ca223394d", 00:16:29.394 "strip_size_kb": 64, 00:16:29.394 "state": "configuring", 00:16:29.394 "raid_level": "raid5f", 00:16:29.394 "superblock": true, 00:16:29.394 "num_base_bdevs": 3, 00:16:29.394 "num_base_bdevs_discovered": 0, 00:16:29.394 "num_base_bdevs_operational": 3, 00:16:29.394 "base_bdevs_list": [ 00:16:29.394 { 00:16:29.394 "name": "BaseBdev1", 00:16:29.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.394 "is_configured": false, 00:16:29.394 "data_offset": 0, 00:16:29.394 "data_size": 0 00:16:29.394 }, 00:16:29.394 { 00:16:29.394 "name": "BaseBdev2", 00:16:29.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.394 "is_configured": false, 00:16:29.394 "data_offset": 0, 00:16:29.394 "data_size": 0 00:16:29.394 }, 00:16:29.394 { 00:16:29.394 "name": "BaseBdev3", 00:16:29.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.394 "is_configured": false, 00:16:29.394 "data_offset": 0, 00:16:29.394 "data_size": 0 00:16:29.394 } 00:16:29.394 ] 00:16:29.394 }' 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.394 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.963 [2024-12-06 16:32:11.584405] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.963 
[2024-12-06 16:32:11.584491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.963 [2024-12-06 16:32:11.596398] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.963 [2024-12-06 16:32:11.596475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.963 [2024-12-06 16:32:11.596501] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.963 [2024-12-06 16:32:11.596525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.963 [2024-12-06 16:32:11.596543] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:29.963 [2024-12-06 16:32:11.596563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.963 [2024-12-06 16:32:11.616993] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.963 BaseBdev1 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:29.963 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.964 [ 00:16:29.964 { 00:16:29.964 "name": "BaseBdev1", 00:16:29.964 "aliases": [ 00:16:29.964 "a526025b-9199-435e-bd17-0cec302a2aed" 00:16:29.964 ], 00:16:29.964 "product_name": "Malloc disk", 00:16:29.964 "block_size": 512, 00:16:29.964 
"num_blocks": 65536, 00:16:29.964 "uuid": "a526025b-9199-435e-bd17-0cec302a2aed", 00:16:29.964 "assigned_rate_limits": { 00:16:29.964 "rw_ios_per_sec": 0, 00:16:29.964 "rw_mbytes_per_sec": 0, 00:16:29.964 "r_mbytes_per_sec": 0, 00:16:29.964 "w_mbytes_per_sec": 0 00:16:29.964 }, 00:16:29.964 "claimed": true, 00:16:29.964 "claim_type": "exclusive_write", 00:16:29.964 "zoned": false, 00:16:29.964 "supported_io_types": { 00:16:29.964 "read": true, 00:16:29.964 "write": true, 00:16:29.964 "unmap": true, 00:16:29.964 "flush": true, 00:16:29.964 "reset": true, 00:16:29.964 "nvme_admin": false, 00:16:29.964 "nvme_io": false, 00:16:29.964 "nvme_io_md": false, 00:16:29.964 "write_zeroes": true, 00:16:29.964 "zcopy": true, 00:16:29.964 "get_zone_info": false, 00:16:29.964 "zone_management": false, 00:16:29.964 "zone_append": false, 00:16:29.964 "compare": false, 00:16:29.964 "compare_and_write": false, 00:16:29.964 "abort": true, 00:16:29.964 "seek_hole": false, 00:16:29.964 "seek_data": false, 00:16:29.964 "copy": true, 00:16:29.964 "nvme_iov_md": false 00:16:29.964 }, 00:16:29.964 "memory_domains": [ 00:16:29.964 { 00:16:29.964 "dma_device_id": "system", 00:16:29.964 "dma_device_type": 1 00:16:29.964 }, 00:16:29.964 { 00:16:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.964 "dma_device_type": 2 00:16:29.964 } 00:16:29.964 ], 00:16:29.964 "driver_specific": {} 00:16:29.964 } 00:16:29.964 ] 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.964 "name": "Existed_Raid", 00:16:29.964 "uuid": "b0f4f7e8-812b-4a8a-9908-5944bd35d12b", 00:16:29.964 "strip_size_kb": 64, 00:16:29.964 "state": "configuring", 00:16:29.964 "raid_level": "raid5f", 00:16:29.964 "superblock": true, 00:16:29.964 "num_base_bdevs": 3, 00:16:29.964 "num_base_bdevs_discovered": 1, 00:16:29.964 "num_base_bdevs_operational": 3, 00:16:29.964 "base_bdevs_list": [ 00:16:29.964 { 00:16:29.964 
"name": "BaseBdev1", 00:16:29.964 "uuid": "a526025b-9199-435e-bd17-0cec302a2aed", 00:16:29.964 "is_configured": true, 00:16:29.964 "data_offset": 2048, 00:16:29.964 "data_size": 63488 00:16:29.964 }, 00:16:29.964 { 00:16:29.964 "name": "BaseBdev2", 00:16:29.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.964 "is_configured": false, 00:16:29.964 "data_offset": 0, 00:16:29.964 "data_size": 0 00:16:29.964 }, 00:16:29.964 { 00:16:29.964 "name": "BaseBdev3", 00:16:29.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.964 "is_configured": false, 00:16:29.964 "data_offset": 0, 00:16:29.964 "data_size": 0 00:16:29.964 } 00:16:29.964 ] 00:16:29.964 }' 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.964 16:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.606 [2024-12-06 16:32:12.108266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.606 [2024-12-06 16:32:12.108319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:30.606 [2024-12-06 16:32:12.120289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.606 [2024-12-06 16:32:12.122295] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.606 [2024-12-06 16:32:12.122370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.606 [2024-12-06 16:32:12.122399] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.606 [2024-12-06 16:32:12.122425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.606 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.606 "name": "Existed_Raid", 00:16:30.606 "uuid": "c99ddf98-3420-44ab-8a0f-80fb4e12a668", 00:16:30.606 "strip_size_kb": 64, 00:16:30.606 "state": "configuring", 00:16:30.606 "raid_level": "raid5f", 00:16:30.606 "superblock": true, 00:16:30.606 "num_base_bdevs": 3, 00:16:30.606 "num_base_bdevs_discovered": 1, 00:16:30.606 "num_base_bdevs_operational": 3, 00:16:30.606 "base_bdevs_list": [ 00:16:30.606 { 00:16:30.606 "name": "BaseBdev1", 00:16:30.606 "uuid": "a526025b-9199-435e-bd17-0cec302a2aed", 00:16:30.606 "is_configured": true, 00:16:30.606 "data_offset": 2048, 00:16:30.606 "data_size": 63488 00:16:30.606 }, 00:16:30.606 { 00:16:30.607 "name": "BaseBdev2", 00:16:30.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.607 "is_configured": false, 00:16:30.607 "data_offset": 0, 00:16:30.607 "data_size": 0 00:16:30.607 }, 00:16:30.607 { 00:16:30.607 "name": "BaseBdev3", 00:16:30.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.607 "is_configured": false, 00:16:30.607 "data_offset": 0, 00:16:30.607 "data_size": 
0 00:16:30.607 } 00:16:30.607 ] 00:16:30.607 }' 00:16:30.607 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.607 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.866 [2024-12-06 16:32:12.586278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.866 BaseBdev2 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.866 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.866 [ 00:16:30.866 { 00:16:30.866 "name": "BaseBdev2", 00:16:30.867 "aliases": [ 00:16:30.867 "06384227-8df2-4d80-a666-5fd05c13cd70" 00:16:30.867 ], 00:16:30.867 "product_name": "Malloc disk", 00:16:30.867 "block_size": 512, 00:16:30.867 "num_blocks": 65536, 00:16:30.867 "uuid": "06384227-8df2-4d80-a666-5fd05c13cd70", 00:16:30.867 "assigned_rate_limits": { 00:16:30.867 "rw_ios_per_sec": 0, 00:16:30.867 "rw_mbytes_per_sec": 0, 00:16:30.867 "r_mbytes_per_sec": 0, 00:16:30.867 "w_mbytes_per_sec": 0 00:16:30.867 }, 00:16:30.867 "claimed": true, 00:16:30.867 "claim_type": "exclusive_write", 00:16:30.867 "zoned": false, 00:16:30.867 "supported_io_types": { 00:16:30.867 "read": true, 00:16:30.867 "write": true, 00:16:30.867 "unmap": true, 00:16:30.867 "flush": true, 00:16:30.867 "reset": true, 00:16:30.867 "nvme_admin": false, 00:16:30.867 "nvme_io": false, 00:16:30.867 "nvme_io_md": false, 00:16:30.867 "write_zeroes": true, 00:16:30.867 "zcopy": true, 00:16:30.867 "get_zone_info": false, 00:16:30.867 "zone_management": false, 00:16:30.867 "zone_append": false, 00:16:30.867 "compare": false, 00:16:30.867 "compare_and_write": false, 00:16:30.867 "abort": true, 00:16:30.867 "seek_hole": false, 00:16:30.867 "seek_data": false, 00:16:30.867 "copy": true, 00:16:30.867 "nvme_iov_md": false 00:16:30.867 }, 00:16:30.867 "memory_domains": [ 00:16:30.867 { 00:16:30.867 "dma_device_id": "system", 00:16:30.867 "dma_device_type": 1 00:16:30.867 }, 00:16:30.867 { 00:16:30.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.867 "dma_device_type": 2 00:16:30.867 } 
00:16:30.867 ], 00:16:30.867 "driver_specific": {} 00:16:30.867 } 00:16:30.867 ] 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.867 16:32:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.867 "name": "Existed_Raid", 00:16:30.867 "uuid": "c99ddf98-3420-44ab-8a0f-80fb4e12a668", 00:16:30.867 "strip_size_kb": 64, 00:16:30.867 "state": "configuring", 00:16:30.867 "raid_level": "raid5f", 00:16:30.867 "superblock": true, 00:16:30.867 "num_base_bdevs": 3, 00:16:30.867 "num_base_bdevs_discovered": 2, 00:16:30.867 "num_base_bdevs_operational": 3, 00:16:30.867 "base_bdevs_list": [ 00:16:30.867 { 00:16:30.867 "name": "BaseBdev1", 00:16:30.867 "uuid": "a526025b-9199-435e-bd17-0cec302a2aed", 00:16:30.867 "is_configured": true, 00:16:30.867 "data_offset": 2048, 00:16:30.867 "data_size": 63488 00:16:30.867 }, 00:16:30.867 { 00:16:30.867 "name": "BaseBdev2", 00:16:30.867 "uuid": "06384227-8df2-4d80-a666-5fd05c13cd70", 00:16:30.867 "is_configured": true, 00:16:30.867 "data_offset": 2048, 00:16:30.867 "data_size": 63488 00:16:30.867 }, 00:16:30.867 { 00:16:30.867 "name": "BaseBdev3", 00:16:30.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.867 "is_configured": false, 00:16:30.867 "data_offset": 0, 00:16:30.867 "data_size": 0 00:16:30.867 } 00:16:30.867 ] 00:16:30.867 }' 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.867 16:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.436 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.437 [2024-12-06 16:32:13.062252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.437 [2024-12-06 16:32:13.062592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:31.437 [2024-12-06 16:32:13.062666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:31.437 [2024-12-06 16:32:13.063018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:31.437 BaseBdev3 00:16:31.437 [2024-12-06 16:32:13.063602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:31.437 [2024-12-06 16:32:13.063665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:31.437 [2024-12-06 16:32:13.063864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.437 [ 00:16:31.437 { 00:16:31.437 "name": "BaseBdev3", 00:16:31.437 "aliases": [ 00:16:31.437 "716915e4-ab31-484e-979b-d93e18dc276e" 00:16:31.437 ], 00:16:31.437 "product_name": "Malloc disk", 00:16:31.437 "block_size": 512, 00:16:31.437 "num_blocks": 65536, 00:16:31.437 "uuid": "716915e4-ab31-484e-979b-d93e18dc276e", 00:16:31.437 "assigned_rate_limits": { 00:16:31.437 "rw_ios_per_sec": 0, 00:16:31.437 "rw_mbytes_per_sec": 0, 00:16:31.437 "r_mbytes_per_sec": 0, 00:16:31.437 "w_mbytes_per_sec": 0 00:16:31.437 }, 00:16:31.437 "claimed": true, 00:16:31.437 "claim_type": "exclusive_write", 00:16:31.437 "zoned": false, 00:16:31.437 "supported_io_types": { 00:16:31.437 "read": true, 00:16:31.437 "write": true, 00:16:31.437 "unmap": true, 00:16:31.437 "flush": true, 00:16:31.437 "reset": true, 00:16:31.437 "nvme_admin": false, 00:16:31.437 "nvme_io": false, 00:16:31.437 "nvme_io_md": false, 00:16:31.437 "write_zeroes": true, 00:16:31.437 "zcopy": true, 00:16:31.437 "get_zone_info": false, 00:16:31.437 "zone_management": false, 00:16:31.437 "zone_append": false, 00:16:31.437 "compare": false, 00:16:31.437 "compare_and_write": false, 00:16:31.437 "abort": true, 00:16:31.437 "seek_hole": false, 00:16:31.437 "seek_data": false, 00:16:31.437 "copy": true, 00:16:31.437 "nvme_iov_md": 
false 00:16:31.437 }, 00:16:31.437 "memory_domains": [ 00:16:31.437 { 00:16:31.437 "dma_device_id": "system", 00:16:31.437 "dma_device_type": 1 00:16:31.437 }, 00:16:31.437 { 00:16:31.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.437 "dma_device_type": 2 00:16:31.437 } 00:16:31.437 ], 00:16:31.437 "driver_specific": {} 00:16:31.437 } 00:16:31.437 ] 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.437 "name": "Existed_Raid", 00:16:31.437 "uuid": "c99ddf98-3420-44ab-8a0f-80fb4e12a668", 00:16:31.437 "strip_size_kb": 64, 00:16:31.437 "state": "online", 00:16:31.437 "raid_level": "raid5f", 00:16:31.437 "superblock": true, 00:16:31.437 "num_base_bdevs": 3, 00:16:31.437 "num_base_bdevs_discovered": 3, 00:16:31.437 "num_base_bdevs_operational": 3, 00:16:31.437 "base_bdevs_list": [ 00:16:31.437 { 00:16:31.437 "name": "BaseBdev1", 00:16:31.437 "uuid": "a526025b-9199-435e-bd17-0cec302a2aed", 00:16:31.437 "is_configured": true, 00:16:31.437 "data_offset": 2048, 00:16:31.437 "data_size": 63488 00:16:31.437 }, 00:16:31.437 { 00:16:31.437 "name": "BaseBdev2", 00:16:31.437 "uuid": "06384227-8df2-4d80-a666-5fd05c13cd70", 00:16:31.437 "is_configured": true, 00:16:31.437 "data_offset": 2048, 00:16:31.437 "data_size": 63488 00:16:31.437 }, 00:16:31.437 { 00:16:31.437 "name": "BaseBdev3", 00:16:31.437 "uuid": "716915e4-ab31-484e-979b-d93e18dc276e", 00:16:31.437 "is_configured": true, 00:16:31.437 "data_offset": 2048, 00:16:31.437 "data_size": 63488 00:16:31.437 } 00:16:31.437 ] 00:16:31.437 }' 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.437 16:32:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.033 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.033 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:32.033 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.033 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.033 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.033 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.033 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:32.033 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.033 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.033 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.033 [2024-12-06 16:32:13.561652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.033 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.033 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.033 "name": "Existed_Raid", 00:16:32.033 "aliases": [ 00:16:32.033 "c99ddf98-3420-44ab-8a0f-80fb4e12a668" 00:16:32.033 ], 00:16:32.033 "product_name": "Raid Volume", 00:16:32.033 "block_size": 512, 00:16:32.033 "num_blocks": 126976, 00:16:32.033 "uuid": "c99ddf98-3420-44ab-8a0f-80fb4e12a668", 00:16:32.033 "assigned_rate_limits": { 00:16:32.033 "rw_ios_per_sec": 0, 00:16:32.033 "rw_mbytes_per_sec": 0, 00:16:32.033 "r_mbytes_per_sec": 
0, 00:16:32.033 "w_mbytes_per_sec": 0 00:16:32.033 }, 00:16:32.033 "claimed": false, 00:16:32.033 "zoned": false, 00:16:32.033 "supported_io_types": { 00:16:32.033 "read": true, 00:16:32.033 "write": true, 00:16:32.033 "unmap": false, 00:16:32.033 "flush": false, 00:16:32.033 "reset": true, 00:16:32.033 "nvme_admin": false, 00:16:32.033 "nvme_io": false, 00:16:32.033 "nvme_io_md": false, 00:16:32.033 "write_zeroes": true, 00:16:32.033 "zcopy": false, 00:16:32.033 "get_zone_info": false, 00:16:32.033 "zone_management": false, 00:16:32.033 "zone_append": false, 00:16:32.033 "compare": false, 00:16:32.033 "compare_and_write": false, 00:16:32.033 "abort": false, 00:16:32.033 "seek_hole": false, 00:16:32.033 "seek_data": false, 00:16:32.033 "copy": false, 00:16:32.033 "nvme_iov_md": false 00:16:32.033 }, 00:16:32.033 "driver_specific": { 00:16:32.033 "raid": { 00:16:32.033 "uuid": "c99ddf98-3420-44ab-8a0f-80fb4e12a668", 00:16:32.033 "strip_size_kb": 64, 00:16:32.033 "state": "online", 00:16:32.033 "raid_level": "raid5f", 00:16:32.033 "superblock": true, 00:16:32.033 "num_base_bdevs": 3, 00:16:32.033 "num_base_bdevs_discovered": 3, 00:16:32.033 "num_base_bdevs_operational": 3, 00:16:32.033 "base_bdevs_list": [ 00:16:32.034 { 00:16:32.034 "name": "BaseBdev1", 00:16:32.034 "uuid": "a526025b-9199-435e-bd17-0cec302a2aed", 00:16:32.034 "is_configured": true, 00:16:32.034 "data_offset": 2048, 00:16:32.034 "data_size": 63488 00:16:32.034 }, 00:16:32.034 { 00:16:32.034 "name": "BaseBdev2", 00:16:32.034 "uuid": "06384227-8df2-4d80-a666-5fd05c13cd70", 00:16:32.034 "is_configured": true, 00:16:32.034 "data_offset": 2048, 00:16:32.034 "data_size": 63488 00:16:32.034 }, 00:16:32.034 { 00:16:32.034 "name": "BaseBdev3", 00:16:32.034 "uuid": "716915e4-ab31-484e-979b-d93e18dc276e", 00:16:32.034 "is_configured": true, 00:16:32.034 "data_offset": 2048, 00:16:32.034 "data_size": 63488 00:16:32.034 } 00:16:32.034 ] 00:16:32.034 } 00:16:32.034 } 00:16:32.034 }' 00:16:32.034 16:32:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:32.034 BaseBdev2 00:16:32.034 BaseBdev3' 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.034 16:32:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.034 [2024-12-06 16:32:13.849006] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.034 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.291 16:32:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.291 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.291 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.291 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.291 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.291 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.291 "name": "Existed_Raid", 00:16:32.291 "uuid": "c99ddf98-3420-44ab-8a0f-80fb4e12a668", 00:16:32.291 "strip_size_kb": 64, 00:16:32.291 "state": "online", 00:16:32.291 "raid_level": "raid5f", 00:16:32.291 "superblock": true, 00:16:32.291 "num_base_bdevs": 3, 00:16:32.291 "num_base_bdevs_discovered": 2, 00:16:32.291 "num_base_bdevs_operational": 2, 00:16:32.291 "base_bdevs_list": [ 00:16:32.291 { 00:16:32.291 "name": null, 00:16:32.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.291 "is_configured": false, 00:16:32.291 "data_offset": 0, 00:16:32.291 "data_size": 63488 00:16:32.291 }, 00:16:32.291 { 00:16:32.291 "name": "BaseBdev2", 00:16:32.291 "uuid": "06384227-8df2-4d80-a666-5fd05c13cd70", 00:16:32.291 "is_configured": true, 00:16:32.291 "data_offset": 2048, 00:16:32.291 "data_size": 63488 00:16:32.291 }, 00:16:32.291 { 00:16:32.291 "name": "BaseBdev3", 00:16:32.291 "uuid": "716915e4-ab31-484e-979b-d93e18dc276e", 00:16:32.291 "is_configured": true, 00:16:32.291 "data_offset": 2048, 00:16:32.291 "data_size": 63488 00:16:32.291 } 00:16:32.291 ] 00:16:32.291 }' 00:16:32.291 16:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.291 16:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.548 16:32:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.548 [2024-12-06 16:32:14.363689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:32.548 [2024-12-06 16:32:14.363945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.548 [2024-12-06 16:32:14.376057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.548 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 [2024-12-06 16:32:14.436070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:32.807 [2024-12-06 16:32:14.436218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 BaseBdev2 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.807 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.807 [ 00:16:32.807 { 00:16:32.807 "name": "BaseBdev2", 00:16:32.807 "aliases": [ 00:16:32.807 "52765134-23a8-4e44-9929-22cc062f0f78" 00:16:32.807 ], 00:16:32.807 "product_name": "Malloc disk", 00:16:32.807 "block_size": 512, 00:16:32.807 "num_blocks": 65536, 00:16:32.807 "uuid": "52765134-23a8-4e44-9929-22cc062f0f78", 00:16:32.807 "assigned_rate_limits": { 00:16:32.807 "rw_ios_per_sec": 0, 00:16:32.807 "rw_mbytes_per_sec": 0, 00:16:32.807 "r_mbytes_per_sec": 0, 00:16:32.807 "w_mbytes_per_sec": 0 00:16:32.807 }, 00:16:32.807 "claimed": false, 00:16:32.807 "zoned": false, 00:16:32.807 "supported_io_types": { 00:16:32.807 "read": true, 00:16:32.807 "write": true, 00:16:32.807 "unmap": true, 00:16:32.807 "flush": true, 00:16:32.807 "reset": true, 00:16:32.807 "nvme_admin": false, 00:16:32.807 "nvme_io": false, 00:16:32.807 "nvme_io_md": false, 00:16:32.807 "write_zeroes": true, 00:16:32.807 "zcopy": true, 00:16:32.807 "get_zone_info": false, 00:16:32.807 "zone_management": false, 00:16:32.807 "zone_append": false, 
00:16:32.807 "compare": false, 00:16:32.807 "compare_and_write": false, 00:16:32.807 "abort": true, 00:16:32.807 "seek_hole": false, 00:16:32.807 "seek_data": false, 00:16:32.807 "copy": true, 00:16:32.807 "nvme_iov_md": false 00:16:32.807 }, 00:16:32.807 "memory_domains": [ 00:16:32.807 { 00:16:32.807 "dma_device_id": "system", 00:16:32.807 "dma_device_type": 1 00:16:32.807 }, 00:16:32.807 { 00:16:32.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.807 "dma_device_type": 2 00:16:32.807 } 00:16:32.807 ], 00:16:32.808 "driver_specific": {} 00:16:32.808 } 00:16:32.808 ] 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.808 BaseBdev3 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:32.808 
16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.808 [ 00:16:32.808 { 00:16:32.808 "name": "BaseBdev3", 00:16:32.808 "aliases": [ 00:16:32.808 "36971d50-2f3a-4add-9586-e7ba9df1ae39" 00:16:32.808 ], 00:16:32.808 "product_name": "Malloc disk", 00:16:32.808 "block_size": 512, 00:16:32.808 "num_blocks": 65536, 00:16:32.808 "uuid": "36971d50-2f3a-4add-9586-e7ba9df1ae39", 00:16:32.808 "assigned_rate_limits": { 00:16:32.808 "rw_ios_per_sec": 0, 00:16:32.808 "rw_mbytes_per_sec": 0, 00:16:32.808 "r_mbytes_per_sec": 0, 00:16:32.808 "w_mbytes_per_sec": 0 00:16:32.808 }, 00:16:32.808 "claimed": false, 00:16:32.808 "zoned": false, 00:16:32.808 "supported_io_types": { 00:16:32.808 "read": true, 00:16:32.808 "write": true, 00:16:32.808 "unmap": true, 00:16:32.808 "flush": true, 00:16:32.808 "reset": true, 00:16:32.808 "nvme_admin": false, 00:16:32.808 "nvme_io": false, 00:16:32.808 "nvme_io_md": false, 00:16:32.808 "write_zeroes": true, 00:16:32.808 "zcopy": true, 00:16:32.808 "get_zone_info": 
false, 00:16:32.808 "zone_management": false, 00:16:32.808 "zone_append": false, 00:16:32.808 "compare": false, 00:16:32.808 "compare_and_write": false, 00:16:32.808 "abort": true, 00:16:32.808 "seek_hole": false, 00:16:32.808 "seek_data": false, 00:16:32.808 "copy": true, 00:16:32.808 "nvme_iov_md": false 00:16:32.808 }, 00:16:32.808 "memory_domains": [ 00:16:32.808 { 00:16:32.808 "dma_device_id": "system", 00:16:32.808 "dma_device_type": 1 00:16:32.808 }, 00:16:32.808 { 00:16:32.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.808 "dma_device_type": 2 00:16:32.808 } 00:16:32.808 ], 00:16:32.808 "driver_specific": {} 00:16:32.808 } 00:16:32.808 ] 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.808 [2024-12-06 16:32:14.609639] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.808 [2024-12-06 16:32:14.609756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.808 [2024-12-06 16:32:14.609811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.808 [2024-12-06 16:32:14.611755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.808 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.067 16:32:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.067 "name": "Existed_Raid", 00:16:33.067 "uuid": "04ae84f5-f156-4cb8-9c7d-49a2e55d53dd", 00:16:33.067 "strip_size_kb": 64, 00:16:33.067 "state": "configuring", 00:16:33.067 "raid_level": "raid5f", 00:16:33.067 "superblock": true, 00:16:33.067 "num_base_bdevs": 3, 00:16:33.067 "num_base_bdevs_discovered": 2, 00:16:33.067 "num_base_bdevs_operational": 3, 00:16:33.067 "base_bdevs_list": [ 00:16:33.067 { 00:16:33.067 "name": "BaseBdev1", 00:16:33.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.067 "is_configured": false, 00:16:33.067 "data_offset": 0, 00:16:33.067 "data_size": 0 00:16:33.067 }, 00:16:33.067 { 00:16:33.067 "name": "BaseBdev2", 00:16:33.067 "uuid": "52765134-23a8-4e44-9929-22cc062f0f78", 00:16:33.067 "is_configured": true, 00:16:33.067 "data_offset": 2048, 00:16:33.067 "data_size": 63488 00:16:33.067 }, 00:16:33.067 { 00:16:33.067 "name": "BaseBdev3", 00:16:33.067 "uuid": "36971d50-2f3a-4add-9586-e7ba9df1ae39", 00:16:33.067 "is_configured": true, 00:16:33.067 "data_offset": 2048, 00:16:33.067 "data_size": 63488 00:16:33.067 } 00:16:33.067 ] 00:16:33.067 }' 00:16:33.067 16:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.067 16:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.326 [2024-12-06 16:32:15.052909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.326 
16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.326 "name": "Existed_Raid", 00:16:33.326 "uuid": 
"04ae84f5-f156-4cb8-9c7d-49a2e55d53dd", 00:16:33.326 "strip_size_kb": 64, 00:16:33.326 "state": "configuring", 00:16:33.326 "raid_level": "raid5f", 00:16:33.326 "superblock": true, 00:16:33.326 "num_base_bdevs": 3, 00:16:33.326 "num_base_bdevs_discovered": 1, 00:16:33.326 "num_base_bdevs_operational": 3, 00:16:33.326 "base_bdevs_list": [ 00:16:33.326 { 00:16:33.326 "name": "BaseBdev1", 00:16:33.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.326 "is_configured": false, 00:16:33.326 "data_offset": 0, 00:16:33.326 "data_size": 0 00:16:33.326 }, 00:16:33.326 { 00:16:33.326 "name": null, 00:16:33.326 "uuid": "52765134-23a8-4e44-9929-22cc062f0f78", 00:16:33.326 "is_configured": false, 00:16:33.326 "data_offset": 0, 00:16:33.326 "data_size": 63488 00:16:33.326 }, 00:16:33.326 { 00:16:33.326 "name": "BaseBdev3", 00:16:33.326 "uuid": "36971d50-2f3a-4add-9586-e7ba9df1ae39", 00:16:33.326 "is_configured": true, 00:16:33.326 "data_offset": 2048, 00:16:33.326 "data_size": 63488 00:16:33.326 } 00:16:33.326 ] 00:16:33.326 }' 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.326 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:33.893 16:32:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.893 [2024-12-06 16:32:15.599697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.893 BaseBdev1 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.893 [ 00:16:33.893 { 00:16:33.893 "name": "BaseBdev1", 00:16:33.893 "aliases": [ 00:16:33.893 "a6144d70-3c67-4e83-be11-769980175b7e" 00:16:33.893 ], 00:16:33.893 "product_name": "Malloc disk", 00:16:33.893 "block_size": 512, 00:16:33.893 "num_blocks": 65536, 00:16:33.893 "uuid": "a6144d70-3c67-4e83-be11-769980175b7e", 00:16:33.893 "assigned_rate_limits": { 00:16:33.893 "rw_ios_per_sec": 0, 00:16:33.893 "rw_mbytes_per_sec": 0, 00:16:33.893 "r_mbytes_per_sec": 0, 00:16:33.893 "w_mbytes_per_sec": 0 00:16:33.893 }, 00:16:33.893 "claimed": true, 00:16:33.893 "claim_type": "exclusive_write", 00:16:33.893 "zoned": false, 00:16:33.893 "supported_io_types": { 00:16:33.893 "read": true, 00:16:33.893 "write": true, 00:16:33.893 "unmap": true, 00:16:33.893 "flush": true, 00:16:33.893 "reset": true, 00:16:33.893 "nvme_admin": false, 00:16:33.893 "nvme_io": false, 00:16:33.893 "nvme_io_md": false, 00:16:33.893 "write_zeroes": true, 00:16:33.893 "zcopy": true, 00:16:33.893 "get_zone_info": false, 00:16:33.893 "zone_management": false, 00:16:33.893 "zone_append": false, 00:16:33.893 "compare": false, 00:16:33.893 "compare_and_write": false, 00:16:33.893 "abort": true, 00:16:33.893 "seek_hole": false, 00:16:33.893 "seek_data": false, 00:16:33.893 "copy": true, 00:16:33.893 "nvme_iov_md": false 00:16:33.893 }, 00:16:33.893 "memory_domains": [ 00:16:33.893 { 00:16:33.893 "dma_device_id": "system", 00:16:33.893 "dma_device_type": 1 00:16:33.893 }, 00:16:33.893 { 00:16:33.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.893 "dma_device_type": 2 00:16:33.893 } 00:16:33.893 ], 00:16:33.893 "driver_specific": {} 00:16:33.893 } 00:16:33.893 ] 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.893 "name": "Existed_Raid", 00:16:33.893 "uuid": 
"04ae84f5-f156-4cb8-9c7d-49a2e55d53dd", 00:16:33.893 "strip_size_kb": 64, 00:16:33.893 "state": "configuring", 00:16:33.893 "raid_level": "raid5f", 00:16:33.893 "superblock": true, 00:16:33.893 "num_base_bdevs": 3, 00:16:33.893 "num_base_bdevs_discovered": 2, 00:16:33.893 "num_base_bdevs_operational": 3, 00:16:33.893 "base_bdevs_list": [ 00:16:33.893 { 00:16:33.893 "name": "BaseBdev1", 00:16:33.893 "uuid": "a6144d70-3c67-4e83-be11-769980175b7e", 00:16:33.893 "is_configured": true, 00:16:33.893 "data_offset": 2048, 00:16:33.893 "data_size": 63488 00:16:33.893 }, 00:16:33.893 { 00:16:33.893 "name": null, 00:16:33.893 "uuid": "52765134-23a8-4e44-9929-22cc062f0f78", 00:16:33.893 "is_configured": false, 00:16:33.893 "data_offset": 0, 00:16:33.893 "data_size": 63488 00:16:33.893 }, 00:16:33.893 { 00:16:33.893 "name": "BaseBdev3", 00:16:33.893 "uuid": "36971d50-2f3a-4add-9586-e7ba9df1ae39", 00:16:33.893 "is_configured": true, 00:16:33.893 "data_offset": 2048, 00:16:33.893 "data_size": 63488 00:16:33.893 } 00:16:33.893 ] 00:16:33.893 }' 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.893 16:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:34.461 16:32:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.461 [2024-12-06 16:32:16.102973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.461 "name": "Existed_Raid", 00:16:34.461 "uuid": "04ae84f5-f156-4cb8-9c7d-49a2e55d53dd", 00:16:34.461 "strip_size_kb": 64, 00:16:34.461 "state": "configuring", 00:16:34.461 "raid_level": "raid5f", 00:16:34.461 "superblock": true, 00:16:34.461 "num_base_bdevs": 3, 00:16:34.461 "num_base_bdevs_discovered": 1, 00:16:34.461 "num_base_bdevs_operational": 3, 00:16:34.461 "base_bdevs_list": [ 00:16:34.461 { 00:16:34.461 "name": "BaseBdev1", 00:16:34.461 "uuid": "a6144d70-3c67-4e83-be11-769980175b7e", 00:16:34.461 "is_configured": true, 00:16:34.461 "data_offset": 2048, 00:16:34.461 "data_size": 63488 00:16:34.461 }, 00:16:34.461 { 00:16:34.461 "name": null, 00:16:34.461 "uuid": "52765134-23a8-4e44-9929-22cc062f0f78", 00:16:34.461 "is_configured": false, 00:16:34.461 "data_offset": 0, 00:16:34.461 "data_size": 63488 00:16:34.461 }, 00:16:34.461 { 00:16:34.461 "name": null, 00:16:34.461 "uuid": "36971d50-2f3a-4add-9586-e7ba9df1ae39", 00:16:34.461 "is_configured": false, 00:16:34.461 "data_offset": 0, 00:16:34.461 "data_size": 63488 00:16:34.461 } 00:16:34.461 ] 00:16:34.461 }' 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.461 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.720 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.720 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:16:34.720 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.720 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.980 [2024-12-06 16:32:16.606135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.980 16:32:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.980 "name": "Existed_Raid", 00:16:34.980 "uuid": "04ae84f5-f156-4cb8-9c7d-49a2e55d53dd", 00:16:34.980 "strip_size_kb": 64, 00:16:34.980 "state": "configuring", 00:16:34.980 "raid_level": "raid5f", 00:16:34.980 "superblock": true, 00:16:34.980 "num_base_bdevs": 3, 00:16:34.980 "num_base_bdevs_discovered": 2, 00:16:34.980 "num_base_bdevs_operational": 3, 00:16:34.980 "base_bdevs_list": [ 00:16:34.980 { 00:16:34.980 "name": "BaseBdev1", 00:16:34.980 "uuid": "a6144d70-3c67-4e83-be11-769980175b7e", 00:16:34.980 "is_configured": true, 00:16:34.980 "data_offset": 2048, 00:16:34.980 "data_size": 63488 00:16:34.980 }, 00:16:34.980 { 00:16:34.980 "name": null, 00:16:34.980 "uuid": "52765134-23a8-4e44-9929-22cc062f0f78", 00:16:34.980 "is_configured": false, 00:16:34.980 "data_offset": 0, 00:16:34.980 "data_size": 63488 00:16:34.980 }, 00:16:34.980 { 00:16:34.980 "name": "BaseBdev3", 00:16:34.980 "uuid": "36971d50-2f3a-4add-9586-e7ba9df1ae39", 00:16:34.980 
"is_configured": true, 00:16:34.980 "data_offset": 2048, 00:16:34.980 "data_size": 63488 00:16:34.980 } 00:16:34.980 ] 00:16:34.980 }' 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.980 16:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.547 [2024-12-06 16:32:17.149332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.547 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.547 "name": "Existed_Raid", 00:16:35.547 "uuid": "04ae84f5-f156-4cb8-9c7d-49a2e55d53dd", 00:16:35.547 "strip_size_kb": 64, 00:16:35.547 "state": "configuring", 00:16:35.547 "raid_level": "raid5f", 00:16:35.548 "superblock": true, 00:16:35.548 "num_base_bdevs": 3, 00:16:35.548 "num_base_bdevs_discovered": 1, 00:16:35.548 "num_base_bdevs_operational": 3, 00:16:35.548 "base_bdevs_list": [ 00:16:35.548 { 00:16:35.548 "name": null, 00:16:35.548 
"uuid": "a6144d70-3c67-4e83-be11-769980175b7e", 00:16:35.548 "is_configured": false, 00:16:35.548 "data_offset": 0, 00:16:35.548 "data_size": 63488 00:16:35.548 }, 00:16:35.548 { 00:16:35.548 "name": null, 00:16:35.548 "uuid": "52765134-23a8-4e44-9929-22cc062f0f78", 00:16:35.548 "is_configured": false, 00:16:35.548 "data_offset": 0, 00:16:35.548 "data_size": 63488 00:16:35.548 }, 00:16:35.548 { 00:16:35.548 "name": "BaseBdev3", 00:16:35.548 "uuid": "36971d50-2f3a-4add-9586-e7ba9df1ae39", 00:16:35.548 "is_configured": true, 00:16:35.548 "data_offset": 2048, 00:16:35.548 "data_size": 63488 00:16:35.548 } 00:16:35.548 ] 00:16:35.548 }' 00:16:35.548 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.548 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.807 [2024-12-06 16:32:17.631546] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.807 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.066 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:36.066 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.066 "name": "Existed_Raid", 00:16:36.066 "uuid": "04ae84f5-f156-4cb8-9c7d-49a2e55d53dd", 00:16:36.066 "strip_size_kb": 64, 00:16:36.066 "state": "configuring", 00:16:36.066 "raid_level": "raid5f", 00:16:36.066 "superblock": true, 00:16:36.066 "num_base_bdevs": 3, 00:16:36.066 "num_base_bdevs_discovered": 2, 00:16:36.066 "num_base_bdevs_operational": 3, 00:16:36.066 "base_bdevs_list": [ 00:16:36.066 { 00:16:36.066 "name": null, 00:16:36.066 "uuid": "a6144d70-3c67-4e83-be11-769980175b7e", 00:16:36.066 "is_configured": false, 00:16:36.066 "data_offset": 0, 00:16:36.066 "data_size": 63488 00:16:36.066 }, 00:16:36.066 { 00:16:36.066 "name": "BaseBdev2", 00:16:36.066 "uuid": "52765134-23a8-4e44-9929-22cc062f0f78", 00:16:36.066 "is_configured": true, 00:16:36.066 "data_offset": 2048, 00:16:36.066 "data_size": 63488 00:16:36.066 }, 00:16:36.066 { 00:16:36.066 "name": "BaseBdev3", 00:16:36.066 "uuid": "36971d50-2f3a-4add-9586-e7ba9df1ae39", 00:16:36.066 "is_configured": true, 00:16:36.066 "data_offset": 2048, 00:16:36.066 "data_size": 63488 00:16:36.066 } 00:16:36.066 ] 00:16:36.066 }' 00:16:36.066 16:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.066 16:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.325 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.325 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.326 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.326 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:36.326 16:32:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.326 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:36.326 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:36.326 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.326 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.326 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.326 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.326 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a6144d70-3c67-4e83-be11-769980175b7e 00:16:36.326 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.326 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.587 [2024-12-06 16:32:18.169826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:36.587 [2024-12-06 16:32:18.170009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:36.587 [2024-12-06 16:32:18.170027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:36.587 [2024-12-06 16:32:18.170313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:36.587 NewBaseBdev 00:16:36.587 [2024-12-06 16:32:18.170776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:36.587 [2024-12-06 16:32:18.170789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:16:36.587 [2024-12-06 
16:32:18.170906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.587 [ 00:16:36.587 { 00:16:36.587 "name": "NewBaseBdev", 00:16:36.587 "aliases": [ 00:16:36.587 "a6144d70-3c67-4e83-be11-769980175b7e" 00:16:36.587 ], 00:16:36.587 "product_name": "Malloc disk", 00:16:36.587 "block_size": 512, 00:16:36.587 "num_blocks": 
65536, 00:16:36.587 "uuid": "a6144d70-3c67-4e83-be11-769980175b7e", 00:16:36.587 "assigned_rate_limits": { 00:16:36.587 "rw_ios_per_sec": 0, 00:16:36.587 "rw_mbytes_per_sec": 0, 00:16:36.587 "r_mbytes_per_sec": 0, 00:16:36.587 "w_mbytes_per_sec": 0 00:16:36.587 }, 00:16:36.587 "claimed": true, 00:16:36.587 "claim_type": "exclusive_write", 00:16:36.587 "zoned": false, 00:16:36.587 "supported_io_types": { 00:16:36.587 "read": true, 00:16:36.587 "write": true, 00:16:36.587 "unmap": true, 00:16:36.587 "flush": true, 00:16:36.587 "reset": true, 00:16:36.587 "nvme_admin": false, 00:16:36.587 "nvme_io": false, 00:16:36.587 "nvme_io_md": false, 00:16:36.587 "write_zeroes": true, 00:16:36.587 "zcopy": true, 00:16:36.587 "get_zone_info": false, 00:16:36.587 "zone_management": false, 00:16:36.587 "zone_append": false, 00:16:36.587 "compare": false, 00:16:36.587 "compare_and_write": false, 00:16:36.587 "abort": true, 00:16:36.587 "seek_hole": false, 00:16:36.587 "seek_data": false, 00:16:36.587 "copy": true, 00:16:36.587 "nvme_iov_md": false 00:16:36.587 }, 00:16:36.587 "memory_domains": [ 00:16:36.587 { 00:16:36.587 "dma_device_id": "system", 00:16:36.587 "dma_device_type": 1 00:16:36.587 }, 00:16:36.587 { 00:16:36.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.587 "dma_device_type": 2 00:16:36.587 } 00:16:36.587 ], 00:16:36.587 "driver_specific": {} 00:16:36.587 } 00:16:36.587 ] 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.587 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.587 "name": "Existed_Raid", 00:16:36.587 "uuid": "04ae84f5-f156-4cb8-9c7d-49a2e55d53dd", 00:16:36.587 "strip_size_kb": 64, 00:16:36.587 "state": "online", 00:16:36.587 "raid_level": "raid5f", 00:16:36.587 "superblock": true, 00:16:36.587 "num_base_bdevs": 3, 00:16:36.587 "num_base_bdevs_discovered": 3, 00:16:36.587 "num_base_bdevs_operational": 3, 00:16:36.587 "base_bdevs_list": [ 00:16:36.587 { 00:16:36.587 "name": "NewBaseBdev", 00:16:36.587 "uuid": 
"a6144d70-3c67-4e83-be11-769980175b7e", 00:16:36.587 "is_configured": true, 00:16:36.587 "data_offset": 2048, 00:16:36.587 "data_size": 63488 00:16:36.587 }, 00:16:36.587 { 00:16:36.587 "name": "BaseBdev2", 00:16:36.587 "uuid": "52765134-23a8-4e44-9929-22cc062f0f78", 00:16:36.587 "is_configured": true, 00:16:36.587 "data_offset": 2048, 00:16:36.587 "data_size": 63488 00:16:36.587 }, 00:16:36.587 { 00:16:36.588 "name": "BaseBdev3", 00:16:36.588 "uuid": "36971d50-2f3a-4add-9586-e7ba9df1ae39", 00:16:36.588 "is_configured": true, 00:16:36.588 "data_offset": 2048, 00:16:36.588 "data_size": 63488 00:16:36.588 } 00:16:36.588 ] 00:16:36.588 }' 00:16:36.588 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.588 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.847 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:36.847 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:36.847 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:36.847 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:36.847 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:36.847 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:36.847 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:36.847 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:36.847 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.847 16:32:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:36.847 [2024-12-06 16:32:18.665328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.847 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.105 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:37.105 "name": "Existed_Raid", 00:16:37.105 "aliases": [ 00:16:37.105 "04ae84f5-f156-4cb8-9c7d-49a2e55d53dd" 00:16:37.105 ], 00:16:37.105 "product_name": "Raid Volume", 00:16:37.105 "block_size": 512, 00:16:37.106 "num_blocks": 126976, 00:16:37.106 "uuid": "04ae84f5-f156-4cb8-9c7d-49a2e55d53dd", 00:16:37.106 "assigned_rate_limits": { 00:16:37.106 "rw_ios_per_sec": 0, 00:16:37.106 "rw_mbytes_per_sec": 0, 00:16:37.106 "r_mbytes_per_sec": 0, 00:16:37.106 "w_mbytes_per_sec": 0 00:16:37.106 }, 00:16:37.106 "claimed": false, 00:16:37.106 "zoned": false, 00:16:37.106 "supported_io_types": { 00:16:37.106 "read": true, 00:16:37.106 "write": true, 00:16:37.106 "unmap": false, 00:16:37.106 "flush": false, 00:16:37.106 "reset": true, 00:16:37.106 "nvme_admin": false, 00:16:37.106 "nvme_io": false, 00:16:37.106 "nvme_io_md": false, 00:16:37.106 "write_zeroes": true, 00:16:37.106 "zcopy": false, 00:16:37.106 "get_zone_info": false, 00:16:37.106 "zone_management": false, 00:16:37.106 "zone_append": false, 00:16:37.106 "compare": false, 00:16:37.106 "compare_and_write": false, 00:16:37.106 "abort": false, 00:16:37.106 "seek_hole": false, 00:16:37.106 "seek_data": false, 00:16:37.106 "copy": false, 00:16:37.106 "nvme_iov_md": false 00:16:37.106 }, 00:16:37.106 "driver_specific": { 00:16:37.106 "raid": { 00:16:37.106 "uuid": "04ae84f5-f156-4cb8-9c7d-49a2e55d53dd", 00:16:37.106 "strip_size_kb": 64, 00:16:37.106 "state": "online", 00:16:37.106 "raid_level": "raid5f", 00:16:37.106 "superblock": true, 00:16:37.106 "num_base_bdevs": 3, 00:16:37.106 "num_base_bdevs_discovered": 3, 00:16:37.106 
"num_base_bdevs_operational": 3, 00:16:37.106 "base_bdevs_list": [ 00:16:37.106 { 00:16:37.106 "name": "NewBaseBdev", 00:16:37.106 "uuid": "a6144d70-3c67-4e83-be11-769980175b7e", 00:16:37.106 "is_configured": true, 00:16:37.106 "data_offset": 2048, 00:16:37.106 "data_size": 63488 00:16:37.106 }, 00:16:37.106 { 00:16:37.106 "name": "BaseBdev2", 00:16:37.106 "uuid": "52765134-23a8-4e44-9929-22cc062f0f78", 00:16:37.106 "is_configured": true, 00:16:37.106 "data_offset": 2048, 00:16:37.106 "data_size": 63488 00:16:37.106 }, 00:16:37.106 { 00:16:37.106 "name": "BaseBdev3", 00:16:37.106 "uuid": "36971d50-2f3a-4add-9586-e7ba9df1ae39", 00:16:37.106 "is_configured": true, 00:16:37.106 "data_offset": 2048, 00:16:37.106 "data_size": 63488 00:16:37.106 } 00:16:37.106 ] 00:16:37.106 } 00:16:37.106 } 00:16:37.106 }' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:37.106 BaseBdev2 00:16:37.106 BaseBdev3' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.106 [2024-12-06 16:32:18.920619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.106 [2024-12-06 16:32:18.920713] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.106 [2024-12-06 16:32:18.920821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.106 [2024-12-06 16:32:18.921108] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.106 [2024-12-06 16:32:18.921124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91545 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 91545 ']' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 91545 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:37.106 
16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.106 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91545 00:16:37.365 killing process with pid 91545 00:16:37.365 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.365 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.365 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91545' 00:16:37.365 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 91545 00:16:37.365 [2024-12-06 16:32:18.970802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.365 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 91545 00:16:37.365 [2024-12-06 16:32:19.003733] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.625 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:37.625 00:16:37.625 real 0m9.080s 00:16:37.625 user 0m15.381s 00:16:37.625 sys 0m1.974s 00:16:37.625 16:32:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.625 ************************************ 00:16:37.625 END TEST raid5f_state_function_test_sb 00:16:37.625 ************************************ 00:16:37.625 16:32:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.625 16:32:19 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:16:37.625 16:32:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:37.625 16:32:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.625 16:32:19 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:16:37.625 ************************************ 00:16:37.625 START TEST raid5f_superblock_test 00:16:37.625 ************************************ 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 
-- # strip_size_create_arg='-z 64' 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=92149 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 92149 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 92149 ']' 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.625 16:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.625 [2024-12-06 16:32:19.376827] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:16:37.625 [2024-12-06 16:32:19.377079] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92149 ] 00:16:37.883 [2024-12-06 16:32:19.534820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.883 [2024-12-06 16:32:19.564363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.883 [2024-12-06 16:32:19.609867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.883 [2024-12-06 16:32:19.610000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.452 malloc1 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.452 [2024-12-06 16:32:20.271580] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:38.452 [2024-12-06 16:32:20.271645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.452 [2024-12-06 16:32:20.271683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:38.452 [2024-12-06 16:32:20.271697] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.452 [2024-12-06 16:32:20.274058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.452 [2024-12-06 16:32:20.274157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.452 pt1 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:38.452 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:38.453 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.453 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.713 malloc2 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.713 [2024-12-06 16:32:20.300798] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:38.713 [2024-12-06 16:32:20.300936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.713 [2024-12-06 16:32:20.300981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:38.713 [2024-12-06 16:32:20.301022] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.713 [2024-12-06 16:32:20.303517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.713 [2024-12-06 16:32:20.303591] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:38.713 pt2 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.713 malloc3 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.713 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.714 [2024-12-06 16:32:20.333793] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:38.714 [2024-12-06 16:32:20.333906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.714 [2024-12-06 16:32:20.333943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:38.714 [2024-12-06 16:32:20.333972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.714 [2024-12-06 16:32:20.336213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.714 [2024-12-06 16:32:20.336303] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:38.714 pt3 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.714 [2024-12-06 16:32:20.345812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:38.714 [2024-12-06 16:32:20.347774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.714 [2024-12-06 16:32:20.347836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:38.714 [2024-12-06 16:32:20.348020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:38.714 [2024-12-06 16:32:20.348034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:16:38.714 [2024-12-06 16:32:20.348345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:38.714 [2024-12-06 16:32:20.348776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:38.714 [2024-12-06 16:32:20.348795] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:38.714 [2024-12-06 16:32:20.348943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.714 "name": "raid_bdev1", 00:16:38.714 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:38.714 "strip_size_kb": 64, 00:16:38.714 "state": "online", 00:16:38.714 "raid_level": "raid5f", 00:16:38.714 "superblock": true, 00:16:38.714 "num_base_bdevs": 3, 00:16:38.714 "num_base_bdevs_discovered": 3, 00:16:38.714 "num_base_bdevs_operational": 3, 00:16:38.714 "base_bdevs_list": [ 00:16:38.714 { 00:16:38.714 "name": "pt1", 00:16:38.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.714 "is_configured": true, 00:16:38.714 "data_offset": 2048, 00:16:38.714 "data_size": 63488 00:16:38.714 }, 00:16:38.714 { 00:16:38.714 "name": "pt2", 00:16:38.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.714 "is_configured": true, 00:16:38.714 "data_offset": 2048, 00:16:38.714 "data_size": 63488 00:16:38.714 }, 00:16:38.714 { 00:16:38.714 "name": "pt3", 00:16:38.714 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.714 "is_configured": true, 00:16:38.714 "data_offset": 2048, 00:16:38.714 "data_size": 63488 00:16:38.714 } 00:16:38.714 ] 00:16:38.714 }' 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.714 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.974 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:38.974 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:38.974 16:32:20 
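The `verify_raid_bdev_state raid_bdev1 online raid5f 64 3` step above pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq` and compares fields. As an illustration only (this is not part of the SPDK test suite), the same checks can be sketched in Python against the JSON captured in the log; the field names and values below are copied verbatim from the `raid_bdev_info` dump above:

```python
import json

# Excerpt of the bdev_raid_get_bdevs output captured in the log above
# (values copied verbatim; illustrative, not an SPDK API).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "pt1", "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt2", "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt3", "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    # Mirrors what verify_raid_bdev_state in bdev_raid.sh asserts via jq.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # Discovered count must match the configured entries in base_bdevs_list.
    configured = [b for b in info["base_bdevs_list"] if b["is_configured"]]
    assert info["num_base_bdevs_discovered"] == len(configured)

verify_raid_bdev_state(raid_bdev_info, "online", "raid5f", 64, 3)
```

Note that with `superblock: true` each 2048-block superblock region is carved out of every base bdev, which is why `data_offset` is 2048 and `data_size` is 63488 rather than the full 65536 blocks of each 32 MiB malloc bdev.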
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.974 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.974 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.974 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.974 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.974 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.974 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.974 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.234 [2024-12-06 16:32:20.813919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.234 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.234 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:39.234 "name": "raid_bdev1", 00:16:39.234 "aliases": [ 00:16:39.234 "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd" 00:16:39.234 ], 00:16:39.234 "product_name": "Raid Volume", 00:16:39.234 "block_size": 512, 00:16:39.234 "num_blocks": 126976, 00:16:39.234 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:39.234 "assigned_rate_limits": { 00:16:39.234 "rw_ios_per_sec": 0, 00:16:39.234 "rw_mbytes_per_sec": 0, 00:16:39.234 "r_mbytes_per_sec": 0, 00:16:39.234 "w_mbytes_per_sec": 0 00:16:39.234 }, 00:16:39.234 "claimed": false, 00:16:39.234 "zoned": false, 00:16:39.234 "supported_io_types": { 00:16:39.234 "read": true, 00:16:39.234 "write": true, 00:16:39.234 "unmap": false, 00:16:39.234 "flush": false, 00:16:39.234 "reset": true, 00:16:39.234 "nvme_admin": false, 00:16:39.234 "nvme_io": false, 00:16:39.234 "nvme_io_md": false, 
00:16:39.234 "write_zeroes": true, 00:16:39.234 "zcopy": false, 00:16:39.234 "get_zone_info": false, 00:16:39.234 "zone_management": false, 00:16:39.234 "zone_append": false, 00:16:39.234 "compare": false, 00:16:39.234 "compare_and_write": false, 00:16:39.234 "abort": false, 00:16:39.234 "seek_hole": false, 00:16:39.234 "seek_data": false, 00:16:39.235 "copy": false, 00:16:39.235 "nvme_iov_md": false 00:16:39.235 }, 00:16:39.235 "driver_specific": { 00:16:39.235 "raid": { 00:16:39.235 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:39.235 "strip_size_kb": 64, 00:16:39.235 "state": "online", 00:16:39.235 "raid_level": "raid5f", 00:16:39.235 "superblock": true, 00:16:39.235 "num_base_bdevs": 3, 00:16:39.235 "num_base_bdevs_discovered": 3, 00:16:39.235 "num_base_bdevs_operational": 3, 00:16:39.235 "base_bdevs_list": [ 00:16:39.235 { 00:16:39.235 "name": "pt1", 00:16:39.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.235 "is_configured": true, 00:16:39.235 "data_offset": 2048, 00:16:39.235 "data_size": 63488 00:16:39.235 }, 00:16:39.235 { 00:16:39.235 "name": "pt2", 00:16:39.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.235 "is_configured": true, 00:16:39.235 "data_offset": 2048, 00:16:39.235 "data_size": 63488 00:16:39.235 }, 00:16:39.235 { 00:16:39.235 "name": "pt3", 00:16:39.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.235 "is_configured": true, 00:16:39.235 "data_offset": 2048, 00:16:39.235 "data_size": 63488 00:16:39.235 } 00:16:39.235 ] 00:16:39.235 } 00:16:39.235 } 00:16:39.235 }' 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:39.235 pt2 00:16:39.235 pt3' 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.235 16:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.235 
16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.235 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:39.235 [2024-12-06 16:32:21.065457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd ']' 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:39.496 16:32:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 [2024-12-06 16:32:21.113144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.496 [2024-12-06 16:32:21.113179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.496 [2024-12-06 16:32:21.113275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.496 [2024-12-06 16:32:21.113346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.496 [2024-12-06 16:32:21.113357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 [2024-12-06 16:32:21.272881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:39.496 [2024-12-06 16:32:21.274763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:39.496 [2024-12-06 16:32:21.274806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:39.496 [2024-12-06 16:32:21.274855] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:39.496 [2024-12-06 16:32:21.274901] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:39.496 [2024-12-06 16:32:21.274919] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:39.496 [2024-12-06 16:32:21.274932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.496 [2024-12-06 16:32:21.274945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:39.496 request: 00:16:39.496 { 00:16:39.496 "name": "raid_bdev1", 00:16:39.496 "raid_level": "raid5f", 00:16:39.496 "base_bdevs": [ 00:16:39.496 "malloc1", 00:16:39.496 "malloc2", 00:16:39.496 "malloc3" 00:16:39.496 ], 00:16:39.496 "strip_size_kb": 64, 00:16:39.496 "superblock": false, 00:16:39.496 "method": "bdev_raid_create", 00:16:39.496 "req_id": 1 00:16:39.496 } 00:16:39.496 Got JSON-RPC error response 00:16:39.496 response: 00:16:39.496 { 00:16:39.496 "code": -17, 00:16:39.496 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:39.496 } 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.496 
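The `NOT rpc_cmd bdev_raid_create ...` step above is expected to fail: the malloc bdevs still carry the superblock written for the raid bdev that was just deleted, so `raid_bdev_configure_base_bdev_check_sb_cb` rejects them and the RPC returns the error body shown in the log. A minimal sketch of what that check amounts to, using the response copied verbatim from the log (illustrative only):

```python
import json

# JSON-RPC error body from the failed bdev_raid_create call in the log above.
error_response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# -17 is -EEXIST on Linux: a superblock for a different (since-deleted) raid
# bdev was found on malloc1/malloc2/malloc3, so re-creating over them fails.
assert error_response["code"] == -17
assert "File exists" in error_response["message"]
```

This matches the shell-side handling, where the `NOT` helper in autotest_common.sh inverts the nonzero exit status of `rpc_cmd` (`es=1` in the trace) so the test passes precisely because the creation was refused.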
16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.496 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.496 [2024-12-06 16:32:21.328741] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:39.496 [2024-12-06 16:32:21.328864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.496 [2024-12-06 16:32:21.328906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:39.496 [2024-12-06 16:32:21.328964] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.496 [2024-12-06 16:32:21.331278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.496 [2024-12-06 16:32:21.331350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:39.496 [2024-12-06 16:32:21.331465] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:39.496 [2024-12-06 16:32:21.331531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:39.756 pt1 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.756 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.756 "name": "raid_bdev1", 00:16:39.756 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:39.756 "strip_size_kb": 64, 00:16:39.757 "state": "configuring", 00:16:39.757 "raid_level": "raid5f", 00:16:39.757 "superblock": true, 00:16:39.757 "num_base_bdevs": 3, 00:16:39.757 "num_base_bdevs_discovered": 1, 00:16:39.757 
"num_base_bdevs_operational": 3, 00:16:39.757 "base_bdevs_list": [ 00:16:39.757 { 00:16:39.757 "name": "pt1", 00:16:39.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.757 "is_configured": true, 00:16:39.757 "data_offset": 2048, 00:16:39.757 "data_size": 63488 00:16:39.757 }, 00:16:39.757 { 00:16:39.757 "name": null, 00:16:39.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.757 "is_configured": false, 00:16:39.757 "data_offset": 2048, 00:16:39.757 "data_size": 63488 00:16:39.757 }, 00:16:39.757 { 00:16:39.757 "name": null, 00:16:39.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.757 "is_configured": false, 00:16:39.757 "data_offset": 2048, 00:16:39.757 "data_size": 63488 00:16:39.757 } 00:16:39.757 ] 00:16:39.757 }' 00:16:39.757 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.757 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.016 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:40.016 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.016 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.016 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.016 [2024-12-06 16:32:21.752113] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.016 [2024-12-06 16:32:21.752212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.016 [2024-12-06 16:32:21.752244] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:40.016 [2024-12-06 16:32:21.752258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.016 [2024-12-06 16:32:21.752696] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.016 [2024-12-06 16:32:21.752718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.016 [2024-12-06 16:32:21.752796] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:40.016 [2024-12-06 16:32:21.752831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.016 pt2 00:16:40.016 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.016 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:40.016 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.016 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.016 [2024-12-06 16:32:21.764074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:40.016 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.016 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:40.016 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.017 "name": "raid_bdev1", 00:16:40.017 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:40.017 "strip_size_kb": 64, 00:16:40.017 "state": "configuring", 00:16:40.017 "raid_level": "raid5f", 00:16:40.017 "superblock": true, 00:16:40.017 "num_base_bdevs": 3, 00:16:40.017 "num_base_bdevs_discovered": 1, 00:16:40.017 "num_base_bdevs_operational": 3, 00:16:40.017 "base_bdevs_list": [ 00:16:40.017 { 00:16:40.017 "name": "pt1", 00:16:40.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.017 "is_configured": true, 00:16:40.017 "data_offset": 2048, 00:16:40.017 "data_size": 63488 00:16:40.017 }, 00:16:40.017 { 00:16:40.017 "name": null, 00:16:40.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.017 "is_configured": false, 00:16:40.017 "data_offset": 0, 00:16:40.017 "data_size": 63488 00:16:40.017 }, 00:16:40.017 { 00:16:40.017 "name": null, 00:16:40.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.017 "is_configured": false, 00:16:40.017 "data_offset": 2048, 00:16:40.017 "data_size": 63488 00:16:40.017 } 00:16:40.017 ] 00:16:40.017 }' 00:16:40.017 16:32:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.017 16:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.586 [2024-12-06 16:32:22.211317] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.586 [2024-12-06 16:32:22.211389] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.586 [2024-12-06 16:32:22.211414] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:40.586 [2024-12-06 16:32:22.211424] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.586 [2024-12-06 16:32:22.211859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.586 [2024-12-06 16:32:22.211885] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.586 [2024-12-06 16:32:22.211967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:40.586 [2024-12-06 16:32:22.211995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.586 pt2 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:40.586 16:32:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.586 [2024-12-06 16:32:22.223269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:40.586 [2024-12-06 16:32:22.223317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.586 [2024-12-06 16:32:22.223335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:40.586 [2024-12-06 16:32:22.223343] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.586 [2024-12-06 16:32:22.223721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.586 [2024-12-06 16:32:22.223744] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:40.586 [2024-12-06 16:32:22.223810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:40.586 [2024-12-06 16:32:22.223835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:40.586 [2024-12-06 16:32:22.223964] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:40.586 [2024-12-06 16:32:22.223980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:40.586 [2024-12-06 16:32:22.224224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:40.586 [2024-12-06 16:32:22.224632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:40.586 [2024-12-06 16:32:22.224653] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:40.586 [2024-12-06 16:32:22.224759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.586 pt3 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.586 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.586 "name": "raid_bdev1", 00:16:40.586 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:40.586 "strip_size_kb": 64, 00:16:40.586 "state": "online", 00:16:40.586 "raid_level": "raid5f", 00:16:40.586 "superblock": true, 00:16:40.586 "num_base_bdevs": 3, 00:16:40.586 "num_base_bdevs_discovered": 3, 00:16:40.586 "num_base_bdevs_operational": 3, 00:16:40.586 "base_bdevs_list": [ 00:16:40.586 { 00:16:40.586 "name": "pt1", 00:16:40.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.586 "is_configured": true, 00:16:40.586 "data_offset": 2048, 00:16:40.586 "data_size": 63488 00:16:40.586 }, 00:16:40.586 { 00:16:40.586 "name": "pt2", 00:16:40.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.586 "is_configured": true, 00:16:40.586 "data_offset": 2048, 00:16:40.586 "data_size": 63488 00:16:40.586 }, 00:16:40.586 { 00:16:40.586 "name": "pt3", 00:16:40.586 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.587 "is_configured": true, 00:16:40.587 "data_offset": 2048, 00:16:40.587 "data_size": 63488 00:16:40.587 } 00:16:40.587 ] 00:16:40.587 }' 00:16:40.587 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.587 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.156 [2024-12-06 16:32:22.698673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:41.156 "name": "raid_bdev1", 00:16:41.156 "aliases": [ 00:16:41.156 "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd" 00:16:41.156 ], 00:16:41.156 "product_name": "Raid Volume", 00:16:41.156 "block_size": 512, 00:16:41.156 "num_blocks": 126976, 00:16:41.156 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:41.156 "assigned_rate_limits": { 00:16:41.156 "rw_ios_per_sec": 0, 00:16:41.156 "rw_mbytes_per_sec": 0, 00:16:41.156 "r_mbytes_per_sec": 0, 00:16:41.156 "w_mbytes_per_sec": 0 00:16:41.156 }, 00:16:41.156 "claimed": false, 00:16:41.156 "zoned": false, 00:16:41.156 "supported_io_types": { 00:16:41.156 "read": true, 00:16:41.156 "write": true, 00:16:41.156 "unmap": false, 00:16:41.156 "flush": false, 00:16:41.156 "reset": true, 00:16:41.156 "nvme_admin": false, 00:16:41.156 "nvme_io": false, 00:16:41.156 "nvme_io_md": false, 00:16:41.156 "write_zeroes": true, 00:16:41.156 "zcopy": false, 00:16:41.156 
"get_zone_info": false, 00:16:41.156 "zone_management": false, 00:16:41.156 "zone_append": false, 00:16:41.156 "compare": false, 00:16:41.156 "compare_and_write": false, 00:16:41.156 "abort": false, 00:16:41.156 "seek_hole": false, 00:16:41.156 "seek_data": false, 00:16:41.156 "copy": false, 00:16:41.156 "nvme_iov_md": false 00:16:41.156 }, 00:16:41.156 "driver_specific": { 00:16:41.156 "raid": { 00:16:41.156 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:41.156 "strip_size_kb": 64, 00:16:41.156 "state": "online", 00:16:41.156 "raid_level": "raid5f", 00:16:41.156 "superblock": true, 00:16:41.156 "num_base_bdevs": 3, 00:16:41.156 "num_base_bdevs_discovered": 3, 00:16:41.156 "num_base_bdevs_operational": 3, 00:16:41.156 "base_bdevs_list": [ 00:16:41.156 { 00:16:41.156 "name": "pt1", 00:16:41.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:41.156 "is_configured": true, 00:16:41.156 "data_offset": 2048, 00:16:41.156 "data_size": 63488 00:16:41.156 }, 00:16:41.156 { 00:16:41.156 "name": "pt2", 00:16:41.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.156 "is_configured": true, 00:16:41.156 "data_offset": 2048, 00:16:41.156 "data_size": 63488 00:16:41.156 }, 00:16:41.156 { 00:16:41.156 "name": "pt3", 00:16:41.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.156 "is_configured": true, 00:16:41.156 "data_offset": 2048, 00:16:41.156 "data_size": 63488 00:16:41.156 } 00:16:41.156 ] 00:16:41.156 } 00:16:41.156 } 00:16:41.156 }' 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:41.156 pt2 00:16:41.156 pt3' 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.156 16:32:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.156 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.157 16:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.157 [2024-12-06 16:32:22.982150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd '!=' 5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd ']' 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.416 [2024-12-06 16:32:23.021926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.416 "name": "raid_bdev1", 00:16:41.416 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:41.416 "strip_size_kb": 64, 00:16:41.416 "state": "online", 00:16:41.416 "raid_level": "raid5f", 00:16:41.416 "superblock": true, 00:16:41.416 "num_base_bdevs": 3, 00:16:41.416 "num_base_bdevs_discovered": 2, 00:16:41.416 "num_base_bdevs_operational": 2, 00:16:41.416 "base_bdevs_list": [ 00:16:41.416 { 00:16:41.416 "name": null, 00:16:41.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.416 "is_configured": false, 00:16:41.416 "data_offset": 0, 00:16:41.416 "data_size": 63488 00:16:41.416 }, 00:16:41.416 { 00:16:41.416 "name": "pt2", 00:16:41.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.416 "is_configured": true, 00:16:41.416 "data_offset": 2048, 00:16:41.416 "data_size": 63488 00:16:41.416 }, 00:16:41.416 { 00:16:41.416 "name": "pt3", 00:16:41.416 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.416 "is_configured": true, 00:16:41.416 "data_offset": 2048, 00:16:41.416 "data_size": 63488 00:16:41.416 } 00:16:41.416 ] 00:16:41.416 }' 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.416 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.676 [2024-12-06 16:32:23.421270] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.676 [2024-12-06 16:32:23.421306] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.676 [2024-12-06 16:32:23.421384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.676 [2024-12-06 16:32:23.421443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.676 [2024-12-06 16:32:23.421456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.676 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.676 [2024-12-06 16:32:23.509083] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:41.676 [2024-12-06 16:32:23.509143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.676 [2024-12-06 16:32:23.509163] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:41.676 [2024-12-06 16:32:23.509173] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:16:41.676 [2024-12-06 16:32:23.511480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.676 [2024-12-06 16:32:23.511515] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:41.676 [2024-12-06 16:32:23.511588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:41.676 [2024-12-06 16:32:23.511621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.936 pt2 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.936 "name": "raid_bdev1", 00:16:41.936 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:41.936 "strip_size_kb": 64, 00:16:41.936 "state": "configuring", 00:16:41.936 "raid_level": "raid5f", 00:16:41.936 "superblock": true, 00:16:41.936 "num_base_bdevs": 3, 00:16:41.936 "num_base_bdevs_discovered": 1, 00:16:41.936 "num_base_bdevs_operational": 2, 00:16:41.936 "base_bdevs_list": [ 00:16:41.936 { 00:16:41.936 "name": null, 00:16:41.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.936 "is_configured": false, 00:16:41.936 "data_offset": 2048, 00:16:41.936 "data_size": 63488 00:16:41.936 }, 00:16:41.936 { 00:16:41.936 "name": "pt2", 00:16:41.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.936 "is_configured": true, 00:16:41.936 "data_offset": 2048, 00:16:41.936 "data_size": 63488 00:16:41.936 }, 00:16:41.936 { 00:16:41.936 "name": null, 00:16:41.936 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.936 "is_configured": false, 00:16:41.936 "data_offset": 2048, 00:16:41.936 "data_size": 63488 00:16:41.936 } 00:16:41.936 ] 00:16:41.936 }' 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.936 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.196 [2024-12-06 16:32:23.964342] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:42.196 [2024-12-06 16:32:23.964419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.196 [2024-12-06 16:32:23.964449] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:42.196 [2024-12-06 16:32:23.964459] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.196 [2024-12-06 16:32:23.964936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.196 [2024-12-06 16:32:23.964964] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:42.196 [2024-12-06 16:32:23.965050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:42.196 [2024-12-06 16:32:23.965084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:42.196 [2024-12-06 16:32:23.965192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:42.196 [2024-12-06 16:32:23.965222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:42.196 [2024-12-06 16:32:23.965515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:42.196 [2024-12-06 16:32:23.966014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:42.196 [2024-12-06 16:32:23.966039] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000006d00 00:16:42.196 [2024-12-06 16:32:23.966304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.196 pt3 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.196 16:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.196 16:32:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.196 "name": "raid_bdev1", 00:16:42.196 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:42.196 "strip_size_kb": 64, 00:16:42.196 "state": "online", 00:16:42.196 "raid_level": "raid5f", 00:16:42.196 "superblock": true, 00:16:42.196 "num_base_bdevs": 3, 00:16:42.196 "num_base_bdevs_discovered": 2, 00:16:42.196 "num_base_bdevs_operational": 2, 00:16:42.196 "base_bdevs_list": [ 00:16:42.196 { 00:16:42.196 "name": null, 00:16:42.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.196 "is_configured": false, 00:16:42.196 "data_offset": 2048, 00:16:42.196 "data_size": 63488 00:16:42.196 }, 00:16:42.196 { 00:16:42.196 "name": "pt2", 00:16:42.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.196 "is_configured": true, 00:16:42.196 "data_offset": 2048, 00:16:42.196 "data_size": 63488 00:16:42.196 }, 00:16:42.196 { 00:16:42.196 "name": "pt3", 00:16:42.196 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.196 "is_configured": true, 00:16:42.196 "data_offset": 2048, 00:16:42.196 "data_size": 63488 00:16:42.196 } 00:16:42.196 ] 00:16:42.196 }' 00:16:42.196 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.196 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.767 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:42.767 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.767 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.767 [2024-12-06 16:32:24.423569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.767 [2024-12-06 16:32:24.423608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.767 [2024-12-06 16:32:24.423705] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.767 [2024-12-06 16:32:24.423788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.767 [2024-12-06 16:32:24.423807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.768 [2024-12-06 16:32:24.495441] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:42.768 [2024-12-06 16:32:24.495530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.768 [2024-12-06 16:32:24.495549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:42.768 [2024-12-06 16:32:24.495560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.768 [2024-12-06 16:32:24.498054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.768 [2024-12-06 16:32:24.498097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:42.768 [2024-12-06 16:32:24.498199] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:42.768 [2024-12-06 16:32:24.498262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:42.768 [2024-12-06 16:32:24.498392] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:42.768 [2024-12-06 16:32:24.498431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.768 [2024-12-06 16:32:24.498452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:42.768 [2024-12-06 16:32:24.498488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:42.768 pt1 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:42.768 16:32:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.768 "name": "raid_bdev1", 00:16:42.768 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:42.768 "strip_size_kb": 64, 00:16:42.768 "state": "configuring", 00:16:42.768 "raid_level": "raid5f", 00:16:42.768 
"superblock": true, 00:16:42.768 "num_base_bdevs": 3, 00:16:42.768 "num_base_bdevs_discovered": 1, 00:16:42.768 "num_base_bdevs_operational": 2, 00:16:42.768 "base_bdevs_list": [ 00:16:42.768 { 00:16:42.768 "name": null, 00:16:42.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.768 "is_configured": false, 00:16:42.768 "data_offset": 2048, 00:16:42.768 "data_size": 63488 00:16:42.768 }, 00:16:42.768 { 00:16:42.768 "name": "pt2", 00:16:42.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.768 "is_configured": true, 00:16:42.768 "data_offset": 2048, 00:16:42.768 "data_size": 63488 00:16:42.768 }, 00:16:42.768 { 00:16:42.768 "name": null, 00:16:42.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.768 "is_configured": false, 00:16:42.768 "data_offset": 2048, 00:16:42.768 "data_size": 63488 00:16:42.768 } 00:16:42.768 ] 00:16:42.768 }' 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.768 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.338 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:43.338 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:43.338 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.338 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.338 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.338 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:43.338 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.339 [2024-12-06 16:32:24.990561] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:43.339 [2024-12-06 16:32:24.990629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.339 [2024-12-06 16:32:24.990649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:43.339 [2024-12-06 16:32:24.990660] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.339 [2024-12-06 16:32:24.991129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.339 [2024-12-06 16:32:24.991166] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:43.339 [2024-12-06 16:32:24.991259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:43.339 [2024-12-06 16:32:24.991296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:43.339 [2024-12-06 16:32:24.991398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:43.339 [2024-12-06 16:32:24.991418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:43.339 [2024-12-06 16:32:24.991675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:43.339 [2024-12-06 16:32:24.992193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:43.339 [2024-12-06 16:32:24.992224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:43.339 [2024-12-06 16:32:24.992399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.339 pt3 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.339 16:32:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.339 16:32:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.339 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.339 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.339 16:32:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.339 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.339 16:32:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.339 "name": "raid_bdev1", 00:16:43.339 "uuid": "5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd", 00:16:43.339 "strip_size_kb": 64, 00:16:43.339 "state": "online", 00:16:43.339 "raid_level": 
"raid5f", 00:16:43.339 "superblock": true, 00:16:43.339 "num_base_bdevs": 3, 00:16:43.339 "num_base_bdevs_discovered": 2, 00:16:43.339 "num_base_bdevs_operational": 2, 00:16:43.339 "base_bdevs_list": [ 00:16:43.339 { 00:16:43.339 "name": null, 00:16:43.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.339 "is_configured": false, 00:16:43.339 "data_offset": 2048, 00:16:43.339 "data_size": 63488 00:16:43.339 }, 00:16:43.339 { 00:16:43.339 "name": "pt2", 00:16:43.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.339 "is_configured": true, 00:16:43.339 "data_offset": 2048, 00:16:43.339 "data_size": 63488 00:16:43.339 }, 00:16:43.339 { 00:16:43.339 "name": "pt3", 00:16:43.339 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.339 "is_configured": true, 00:16:43.339 "data_offset": 2048, 00:16:43.339 "data_size": 63488 00:16:43.339 } 00:16:43.339 ] 00:16:43.339 }' 00:16:43.339 16:32:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.339 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.908 [2024-12-06 16:32:25.533940] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd '!=' 5cc86c61-3d2e-473b-8cfa-632aeb6fe4cd ']' 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 92149 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 92149 ']' 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 92149 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92149 00:16:43.908 killing process with pid 92149 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92149' 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 92149 00:16:43.908 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 92149 00:16:43.908 [2024-12-06 16:32:25.590689] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:16:43.908 [2024-12-06 16:32:25.590793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.908 [2024-12-06 16:32:25.590894] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.908 [2024-12-06 16:32:25.590912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:43.908 [2024-12-06 16:32:25.625362] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.167 16:32:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:44.167 00:16:44.167 real 0m6.561s 00:16:44.167 user 0m11.003s 00:16:44.167 sys 0m1.395s 00:16:44.167 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.167 16:32:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.167 ************************************ 00:16:44.168 END TEST raid5f_superblock_test 00:16:44.168 ************************************ 00:16:44.168 16:32:25 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:44.168 16:32:25 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:44.168 16:32:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:44.168 16:32:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.168 16:32:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.168 ************************************ 00:16:44.168 START TEST raid5f_rebuild_test 00:16:44.168 ************************************ 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:44.168 16:32:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92582 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92582 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 92582 ']' 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.168 16:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.427 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:16:44.427 Zero copy mechanism will not be used. 00:16:44.427 [2024-12-06 16:32:26.014582] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:16:44.427 [2024-12-06 16:32:26.014705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92582 ] 00:16:44.427 [2024-12-06 16:32:26.179194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.427 [2024-12-06 16:32:26.205789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.427 [2024-12-06 16:32:26.248798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.427 [2024-12-06 16:32:26.248838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.368 BaseBdev1_malloc 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.368 16:32:26 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.368 [2024-12-06 16:32:26.869200] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:45.368 [2024-12-06 16:32:26.869266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.368 [2024-12-06 16:32:26.869298] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:45.368 [2024-12-06 16:32:26.869312] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.368 [2024-12-06 16:32:26.871667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.368 [2024-12-06 16:32:26.871704] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:45.368 BaseBdev1 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.368 BaseBdev2_malloc 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.368 [2024-12-06 16:32:26.898143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:16:45.368 [2024-12-06 16:32:26.898193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.368 [2024-12-06 16:32:26.898224] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:45.368 [2024-12-06 16:32:26.898233] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.368 [2024-12-06 16:32:26.900313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.368 [2024-12-06 16:32:26.900346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:45.368 BaseBdev2 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.368 BaseBdev3_malloc 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.368 [2024-12-06 16:32:26.926933] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:45.368 [2024-12-06 16:32:26.926980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.368 [2024-12-06 16:32:26.927003] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:16:45.368 [2024-12-06 16:32:26.927011] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.368 [2024-12-06 16:32:26.929085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.368 [2024-12-06 16:32:26.929119] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:45.368 BaseBdev3 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.368 spare_malloc 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.368 spare_delay 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.368 [2024-12-06 16:32:26.976122] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:45.368 [2024-12-06 16:32:26.976169] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.368 [2024-12-06 16:32:26.976190] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:45.368 [2024-12-06 16:32:26.976198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.368 [2024-12-06 16:32:26.978288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.368 [2024-12-06 16:32:26.978322] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:45.368 spare 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.368 [2024-12-06 16:32:26.988158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.368 [2024-12-06 16:32:26.989976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.368 [2024-12-06 16:32:26.990040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.368 [2024-12-06 16:32:26.990121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:45.368 [2024-12-06 16:32:26.990137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:45.368 [2024-12-06 16:32:26.990383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:45.368 [2024-12-06 16:32:26.990823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:45.368 [2024-12-06 16:32:26.990847] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:45.368 [2024-12-06 16:32:26.990983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:45.368 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.369 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.369 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.369 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.369 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.369 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.369 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.369 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.369 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.369 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.369 16:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.369 16:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.369 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.369 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.369 16:32:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.369 "name": "raid_bdev1", 00:16:45.369 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:45.369 "strip_size_kb": 64, 00:16:45.369 "state": "online", 00:16:45.369 "raid_level": "raid5f", 00:16:45.369 "superblock": false, 00:16:45.369 "num_base_bdevs": 3, 00:16:45.369 "num_base_bdevs_discovered": 3, 00:16:45.369 "num_base_bdevs_operational": 3, 00:16:45.369 "base_bdevs_list": [ 00:16:45.369 { 00:16:45.369 "name": "BaseBdev1", 00:16:45.369 "uuid": "7d6e7cc6-3b7a-5db5-87a4-01004df24acc", 00:16:45.369 "is_configured": true, 00:16:45.369 "data_offset": 0, 00:16:45.369 "data_size": 65536 00:16:45.369 }, 00:16:45.369 { 00:16:45.369 "name": "BaseBdev2", 00:16:45.369 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:45.369 "is_configured": true, 00:16:45.369 "data_offset": 0, 00:16:45.369 "data_size": 65536 00:16:45.369 }, 00:16:45.369 { 00:16:45.369 "name": "BaseBdev3", 00:16:45.369 "uuid": "d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:45.369 "is_configured": true, 00:16:45.369 "data_offset": 0, 00:16:45.369 "data_size": 65536 00:16:45.369 } 00:16:45.369 ] 00:16:45.369 }' 00:16:45.369 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.369 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.631 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:45.631 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:45.631 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.631 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.631 [2024-12-06 16:32:27.435821] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.631 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:45.631 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:45.631 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.631 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:45.631 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.631 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:16:45.894 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:45.894 [2024-12-06 16:32:27.711261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:45.894 /dev/nbd0 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.153 1+0 records in 00:16:46.153 1+0 records out 00:16:46.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445974 s, 9.2 MB/s 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:46.153 16:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:46.413 512+0 records in 00:16:46.413 512+0 records out 00:16:46.413 67108864 bytes (67 MB, 64 MiB) copied, 0.30179 s, 222 MB/s 00:16:46.413 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:46.413 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.413 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:46.413 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:46.413 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:46.413 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:46.413 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:46.672 [2024-12-06 16:32:28.284853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.672 [2024-12-06 16:32:28.320891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.672 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.673 16:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.673 16:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.673 16:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.673 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.673 "name": "raid_bdev1", 00:16:46.673 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:46.673 "strip_size_kb": 64, 00:16:46.673 "state": "online", 00:16:46.673 "raid_level": "raid5f", 00:16:46.673 "superblock": false, 00:16:46.673 "num_base_bdevs": 3, 00:16:46.673 "num_base_bdevs_discovered": 2, 00:16:46.673 "num_base_bdevs_operational": 2, 00:16:46.673 "base_bdevs_list": [ 00:16:46.673 { 00:16:46.673 "name": null, 00:16:46.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.673 "is_configured": false, 00:16:46.673 "data_offset": 0, 00:16:46.673 "data_size": 65536 00:16:46.673 }, 00:16:46.673 { 00:16:46.673 "name": "BaseBdev2", 00:16:46.673 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:46.673 "is_configured": true, 00:16:46.673 "data_offset": 0, 00:16:46.673 "data_size": 65536 00:16:46.673 }, 00:16:46.673 { 00:16:46.673 "name": "BaseBdev3", 00:16:46.673 "uuid": 
"d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:46.673 "is_configured": true, 00:16:46.673 "data_offset": 0, 00:16:46.673 "data_size": 65536 00:16:46.673 } 00:16:46.673 ] 00:16:46.673 }' 00:16:46.673 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.673 16:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.242 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:47.242 16:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.242 16:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.242 [2024-12-06 16:32:28.784185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.242 [2024-12-06 16:32:28.789059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:16:47.242 16:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.242 16:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:47.242 [2024-12-06 16:32:28.791423] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.181 16:32:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.181 "name": "raid_bdev1", 00:16:48.181 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:48.181 "strip_size_kb": 64, 00:16:48.181 "state": "online", 00:16:48.181 "raid_level": "raid5f", 00:16:48.181 "superblock": false, 00:16:48.181 "num_base_bdevs": 3, 00:16:48.181 "num_base_bdevs_discovered": 3, 00:16:48.181 "num_base_bdevs_operational": 3, 00:16:48.181 "process": { 00:16:48.181 "type": "rebuild", 00:16:48.181 "target": "spare", 00:16:48.181 "progress": { 00:16:48.181 "blocks": 20480, 00:16:48.181 "percent": 15 00:16:48.181 } 00:16:48.181 }, 00:16:48.181 "base_bdevs_list": [ 00:16:48.181 { 00:16:48.181 "name": "spare", 00:16:48.181 "uuid": "6883daf5-7a7f-509f-9529-3f657dd7856d", 00:16:48.181 "is_configured": true, 00:16:48.181 "data_offset": 0, 00:16:48.181 "data_size": 65536 00:16:48.181 }, 00:16:48.181 { 00:16:48.181 "name": "BaseBdev2", 00:16:48.181 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:48.181 "is_configured": true, 00:16:48.181 "data_offset": 0, 00:16:48.181 "data_size": 65536 00:16:48.181 }, 00:16:48.181 { 00:16:48.181 "name": "BaseBdev3", 00:16:48.181 "uuid": "d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:48.181 "is_configured": true, 00:16:48.181 "data_offset": 0, 00:16:48.181 "data_size": 65536 00:16:48.181 } 00:16:48.181 ] 00:16:48.181 }' 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.181 16:32:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.181 [2024-12-06 16:32:29.943801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.181 [2024-12-06 16:32:30.001014] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:48.181 [2024-12-06 16:32:30.001099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.181 [2024-12-06 16:32:30.001118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.181 [2024-12-06 16:32:30.001129] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:48.181 16:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.181 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:48.181 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.181 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.181 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.181 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.181 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:16:48.181 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.181 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.181 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.181 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.442 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.442 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.442 16:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.442 16:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.442 16:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.442 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.442 "name": "raid_bdev1", 00:16:48.442 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:48.442 "strip_size_kb": 64, 00:16:48.442 "state": "online", 00:16:48.442 "raid_level": "raid5f", 00:16:48.442 "superblock": false, 00:16:48.442 "num_base_bdevs": 3, 00:16:48.442 "num_base_bdevs_discovered": 2, 00:16:48.442 "num_base_bdevs_operational": 2, 00:16:48.442 "base_bdevs_list": [ 00:16:48.442 { 00:16:48.442 "name": null, 00:16:48.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.442 "is_configured": false, 00:16:48.442 "data_offset": 0, 00:16:48.442 "data_size": 65536 00:16:48.442 }, 00:16:48.442 { 00:16:48.442 "name": "BaseBdev2", 00:16:48.442 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:48.442 "is_configured": true, 00:16:48.442 "data_offset": 0, 00:16:48.442 "data_size": 65536 00:16:48.442 }, 00:16:48.442 { 00:16:48.442 "name": "BaseBdev3", 00:16:48.442 "uuid": 
"d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:48.442 "is_configured": true, 00:16:48.442 "data_offset": 0, 00:16:48.442 "data_size": 65536 00:16:48.442 } 00:16:48.442 ] 00:16:48.442 }' 00:16:48.442 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.442 16:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.702 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.702 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.702 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.702 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.702 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.702 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.702 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.702 16:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.702 16:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.702 16:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.963 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.963 "name": "raid_bdev1", 00:16:48.963 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:48.963 "strip_size_kb": 64, 00:16:48.963 "state": "online", 00:16:48.963 "raid_level": "raid5f", 00:16:48.963 "superblock": false, 00:16:48.963 "num_base_bdevs": 3, 00:16:48.963 "num_base_bdevs_discovered": 2, 00:16:48.963 "num_base_bdevs_operational": 2, 00:16:48.963 "base_bdevs_list": [ 00:16:48.963 { 00:16:48.963 
"name": null, 00:16:48.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.963 "is_configured": false, 00:16:48.963 "data_offset": 0, 00:16:48.963 "data_size": 65536 00:16:48.963 }, 00:16:48.963 { 00:16:48.963 "name": "BaseBdev2", 00:16:48.963 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:48.963 "is_configured": true, 00:16:48.963 "data_offset": 0, 00:16:48.963 "data_size": 65536 00:16:48.963 }, 00:16:48.963 { 00:16:48.963 "name": "BaseBdev3", 00:16:48.963 "uuid": "d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:48.963 "is_configured": true, 00:16:48.963 "data_offset": 0, 00:16:48.963 "data_size": 65536 00:16:48.963 } 00:16:48.963 ] 00:16:48.963 }' 00:16:48.963 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.963 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.963 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.963 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.963 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:48.963 16:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.963 16:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.963 [2024-12-06 16:32:30.626539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.963 [2024-12-06 16:32:30.631177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:16:48.963 16:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.963 16:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:48.963 [2024-12-06 16:32:30.633530] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.902 "name": "raid_bdev1", 00:16:49.902 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:49.902 "strip_size_kb": 64, 00:16:49.902 "state": "online", 00:16:49.902 "raid_level": "raid5f", 00:16:49.902 "superblock": false, 00:16:49.902 "num_base_bdevs": 3, 00:16:49.902 "num_base_bdevs_discovered": 3, 00:16:49.902 "num_base_bdevs_operational": 3, 00:16:49.902 "process": { 00:16:49.902 "type": "rebuild", 00:16:49.902 "target": "spare", 00:16:49.902 "progress": { 00:16:49.902 "blocks": 20480, 00:16:49.902 "percent": 15 00:16:49.902 } 00:16:49.902 }, 00:16:49.902 "base_bdevs_list": [ 00:16:49.902 { 00:16:49.902 "name": "spare", 00:16:49.902 "uuid": "6883daf5-7a7f-509f-9529-3f657dd7856d", 00:16:49.902 "is_configured": true, 00:16:49.902 "data_offset": 0, 
00:16:49.902 "data_size": 65536 00:16:49.902 }, 00:16:49.902 { 00:16:49.902 "name": "BaseBdev2", 00:16:49.902 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:49.902 "is_configured": true, 00:16:49.902 "data_offset": 0, 00:16:49.902 "data_size": 65536 00:16:49.902 }, 00:16:49.902 { 00:16:49.902 "name": "BaseBdev3", 00:16:49.902 "uuid": "d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:49.902 "is_configured": true, 00:16:49.902 "data_offset": 0, 00:16:49.902 "data_size": 65536 00:16:49.902 } 00:16:49.902 ] 00:16:49.902 }' 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.902 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=461 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.160 16:32:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.160 "name": "raid_bdev1", 00:16:50.160 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:50.160 "strip_size_kb": 64, 00:16:50.160 "state": "online", 00:16:50.160 "raid_level": "raid5f", 00:16:50.160 "superblock": false, 00:16:50.160 "num_base_bdevs": 3, 00:16:50.160 "num_base_bdevs_discovered": 3, 00:16:50.160 "num_base_bdevs_operational": 3, 00:16:50.160 "process": { 00:16:50.160 "type": "rebuild", 00:16:50.160 "target": "spare", 00:16:50.160 "progress": { 00:16:50.160 "blocks": 22528, 00:16:50.160 "percent": 17 00:16:50.160 } 00:16:50.160 }, 00:16:50.160 "base_bdevs_list": [ 00:16:50.160 { 00:16:50.160 "name": "spare", 00:16:50.160 "uuid": "6883daf5-7a7f-509f-9529-3f657dd7856d", 00:16:50.160 "is_configured": true, 00:16:50.160 "data_offset": 0, 00:16:50.160 "data_size": 65536 00:16:50.160 }, 00:16:50.160 { 00:16:50.160 "name": "BaseBdev2", 00:16:50.160 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:50.160 "is_configured": true, 00:16:50.160 "data_offset": 0, 00:16:50.160 "data_size": 65536 00:16:50.160 }, 00:16:50.160 { 00:16:50.160 "name": "BaseBdev3", 00:16:50.160 "uuid": "d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:50.160 "is_configured": true, 00:16:50.160 "data_offset": 0, 00:16:50.160 "data_size": 65536 00:16:50.160 } 
00:16:50.160 ] 00:16:50.160 }' 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.160 16:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.099 16:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.099 16:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.099 16:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.099 16:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.099 16:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.099 16:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.099 16:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.099 16:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.099 16:32:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.099 16:32:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.357 16:32:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.357 16:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.357 "name": "raid_bdev1", 00:16:51.357 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:51.357 
"strip_size_kb": 64, 00:16:51.357 "state": "online", 00:16:51.357 "raid_level": "raid5f", 00:16:51.357 "superblock": false, 00:16:51.357 "num_base_bdevs": 3, 00:16:51.357 "num_base_bdevs_discovered": 3, 00:16:51.357 "num_base_bdevs_operational": 3, 00:16:51.357 "process": { 00:16:51.357 "type": "rebuild", 00:16:51.357 "target": "spare", 00:16:51.357 "progress": { 00:16:51.357 "blocks": 45056, 00:16:51.357 "percent": 34 00:16:51.357 } 00:16:51.357 }, 00:16:51.357 "base_bdevs_list": [ 00:16:51.357 { 00:16:51.357 "name": "spare", 00:16:51.357 "uuid": "6883daf5-7a7f-509f-9529-3f657dd7856d", 00:16:51.357 "is_configured": true, 00:16:51.357 "data_offset": 0, 00:16:51.357 "data_size": 65536 00:16:51.357 }, 00:16:51.357 { 00:16:51.357 "name": "BaseBdev2", 00:16:51.357 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:51.357 "is_configured": true, 00:16:51.357 "data_offset": 0, 00:16:51.357 "data_size": 65536 00:16:51.357 }, 00:16:51.357 { 00:16:51.357 "name": "BaseBdev3", 00:16:51.357 "uuid": "d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:51.357 "is_configured": true, 00:16:51.357 "data_offset": 0, 00:16:51.357 "data_size": 65536 00:16:51.357 } 00:16:51.357 ] 00:16:51.357 }' 00:16:51.357 16:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.357 16:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.357 16:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.357 16:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.357 16:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.302 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.302 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.302 16:32:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.302 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.302 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.302 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.302 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.302 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.302 16:32:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.302 16:32:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.302 16:32:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.302 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.302 "name": "raid_bdev1", 00:16:52.302 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:52.302 "strip_size_kb": 64, 00:16:52.302 "state": "online", 00:16:52.302 "raid_level": "raid5f", 00:16:52.302 "superblock": false, 00:16:52.302 "num_base_bdevs": 3, 00:16:52.302 "num_base_bdevs_discovered": 3, 00:16:52.302 "num_base_bdevs_operational": 3, 00:16:52.302 "process": { 00:16:52.302 "type": "rebuild", 00:16:52.302 "target": "spare", 00:16:52.302 "progress": { 00:16:52.302 "blocks": 69632, 00:16:52.302 "percent": 53 00:16:52.302 } 00:16:52.302 }, 00:16:52.302 "base_bdevs_list": [ 00:16:52.302 { 00:16:52.302 "name": "spare", 00:16:52.302 "uuid": "6883daf5-7a7f-509f-9529-3f657dd7856d", 00:16:52.302 "is_configured": true, 00:16:52.302 "data_offset": 0, 00:16:52.302 "data_size": 65536 00:16:52.302 }, 00:16:52.302 { 00:16:52.302 "name": "BaseBdev2", 00:16:52.302 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:52.302 
"is_configured": true, 00:16:52.302 "data_offset": 0, 00:16:52.302 "data_size": 65536 00:16:52.302 }, 00:16:52.302 { 00:16:52.302 "name": "BaseBdev3", 00:16:52.302 "uuid": "d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:52.302 "is_configured": true, 00:16:52.302 "data_offset": 0, 00:16:52.302 "data_size": 65536 00:16:52.302 } 00:16:52.302 ] 00:16:52.302 }' 00:16:52.302 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.561 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.561 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.561 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.561 16:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.498 "name": "raid_bdev1", 00:16:53.498 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:53.498 "strip_size_kb": 64, 00:16:53.498 "state": "online", 00:16:53.498 "raid_level": "raid5f", 00:16:53.498 "superblock": false, 00:16:53.498 "num_base_bdevs": 3, 00:16:53.498 "num_base_bdevs_discovered": 3, 00:16:53.498 "num_base_bdevs_operational": 3, 00:16:53.498 "process": { 00:16:53.498 "type": "rebuild", 00:16:53.498 "target": "spare", 00:16:53.498 "progress": { 00:16:53.498 "blocks": 92160, 00:16:53.498 "percent": 70 00:16:53.498 } 00:16:53.498 }, 00:16:53.498 "base_bdevs_list": [ 00:16:53.498 { 00:16:53.498 "name": "spare", 00:16:53.498 "uuid": "6883daf5-7a7f-509f-9529-3f657dd7856d", 00:16:53.498 "is_configured": true, 00:16:53.498 "data_offset": 0, 00:16:53.498 "data_size": 65536 00:16:53.498 }, 00:16:53.498 { 00:16:53.498 "name": "BaseBdev2", 00:16:53.498 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:53.498 "is_configured": true, 00:16:53.498 "data_offset": 0, 00:16:53.498 "data_size": 65536 00:16:53.498 }, 00:16:53.498 { 00:16:53.498 "name": "BaseBdev3", 00:16:53.498 "uuid": "d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:53.498 "is_configured": true, 00:16:53.498 "data_offset": 0, 00:16:53.498 "data_size": 65536 00:16:53.498 } 00:16:53.498 ] 00:16:53.498 }' 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.498 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.757 16:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.757 16:32:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.694 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.694 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.694 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.694 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.694 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.694 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.694 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.694 16:32:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.694 16:32:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.694 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.694 16:32:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.694 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.694 "name": "raid_bdev1", 00:16:54.694 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:54.694 "strip_size_kb": 64, 00:16:54.694 "state": "online", 00:16:54.694 "raid_level": "raid5f", 00:16:54.694 "superblock": false, 00:16:54.694 "num_base_bdevs": 3, 00:16:54.694 "num_base_bdevs_discovered": 3, 00:16:54.694 "num_base_bdevs_operational": 3, 00:16:54.694 "process": { 00:16:54.694 "type": "rebuild", 00:16:54.694 "target": "spare", 00:16:54.695 "progress": { 00:16:54.695 "blocks": 116736, 00:16:54.695 "percent": 89 00:16:54.695 } 00:16:54.695 }, 00:16:54.695 "base_bdevs_list": [ 00:16:54.695 { 
00:16:54.695 "name": "spare", 00:16:54.695 "uuid": "6883daf5-7a7f-509f-9529-3f657dd7856d", 00:16:54.695 "is_configured": true, 00:16:54.695 "data_offset": 0, 00:16:54.695 "data_size": 65536 00:16:54.695 }, 00:16:54.695 { 00:16:54.695 "name": "BaseBdev2", 00:16:54.695 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:54.695 "is_configured": true, 00:16:54.695 "data_offset": 0, 00:16:54.695 "data_size": 65536 00:16:54.695 }, 00:16:54.695 { 00:16:54.695 "name": "BaseBdev3", 00:16:54.695 "uuid": "d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:54.695 "is_configured": true, 00:16:54.695 "data_offset": 0, 00:16:54.695 "data_size": 65536 00:16:54.695 } 00:16:54.695 ] 00:16:54.695 }' 00:16:54.695 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.695 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.695 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.695 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.695 16:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.263 [2024-12-06 16:32:37.087373] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:55.263 [2024-12-06 16:32:37.087539] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:55.263 [2024-12-06 16:32:37.087620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.832 16:32:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.832 "name": "raid_bdev1", 00:16:55.832 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:55.832 "strip_size_kb": 64, 00:16:55.832 "state": "online", 00:16:55.832 "raid_level": "raid5f", 00:16:55.832 "superblock": false, 00:16:55.832 "num_base_bdevs": 3, 00:16:55.832 "num_base_bdevs_discovered": 3, 00:16:55.832 "num_base_bdevs_operational": 3, 00:16:55.832 "base_bdevs_list": [ 00:16:55.832 { 00:16:55.832 "name": "spare", 00:16:55.832 "uuid": "6883daf5-7a7f-509f-9529-3f657dd7856d", 00:16:55.832 "is_configured": true, 00:16:55.832 "data_offset": 0, 00:16:55.832 "data_size": 65536 00:16:55.832 }, 00:16:55.832 { 00:16:55.832 "name": "BaseBdev2", 00:16:55.832 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:55.832 "is_configured": true, 00:16:55.832 "data_offset": 0, 00:16:55.832 "data_size": 65536 00:16:55.832 }, 00:16:55.832 { 00:16:55.832 "name": "BaseBdev3", 00:16:55.832 "uuid": "d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:55.832 "is_configured": true, 00:16:55.832 "data_offset": 0, 00:16:55.832 "data_size": 65536 00:16:55.832 } 
00:16:55.832 ] 00:16:55.832 }' 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:55.832 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.092 "name": "raid_bdev1", 00:16:56.092 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:56.092 "strip_size_kb": 64, 00:16:56.092 "state": "online", 00:16:56.092 "raid_level": "raid5f", 00:16:56.092 "superblock": false, 
00:16:56.092 "num_base_bdevs": 3, 00:16:56.092 "num_base_bdevs_discovered": 3, 00:16:56.092 "num_base_bdevs_operational": 3, 00:16:56.092 "base_bdevs_list": [ 00:16:56.092 { 00:16:56.092 "name": "spare", 00:16:56.092 "uuid": "6883daf5-7a7f-509f-9529-3f657dd7856d", 00:16:56.092 "is_configured": true, 00:16:56.092 "data_offset": 0, 00:16:56.092 "data_size": 65536 00:16:56.092 }, 00:16:56.092 { 00:16:56.092 "name": "BaseBdev2", 00:16:56.092 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:56.092 "is_configured": true, 00:16:56.092 "data_offset": 0, 00:16:56.092 "data_size": 65536 00:16:56.092 }, 00:16:56.092 { 00:16:56.092 "name": "BaseBdev3", 00:16:56.092 "uuid": "d3a23697-20c3-5e33-8770-adfaf469c108", 00:16:56.092 "is_configured": true, 00:16:56.092 "data_offset": 0, 00:16:56.092 "data_size": 65536 00:16:56.092 } 00:16:56.092 ] 00:16:56.092 }' 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.092 
16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.092 "name": "raid_bdev1", 00:16:56.092 "uuid": "ab7c09d0-cdc3-49f7-a84e-5b16b583a036", 00:16:56.092 "strip_size_kb": 64, 00:16:56.092 "state": "online", 00:16:56.092 "raid_level": "raid5f", 00:16:56.092 "superblock": false, 00:16:56.092 "num_base_bdevs": 3, 00:16:56.092 "num_base_bdevs_discovered": 3, 00:16:56.092 "num_base_bdevs_operational": 3, 00:16:56.092 "base_bdevs_list": [ 00:16:56.092 { 00:16:56.092 "name": "spare", 00:16:56.092 "uuid": "6883daf5-7a7f-509f-9529-3f657dd7856d", 00:16:56.092 "is_configured": true, 00:16:56.092 "data_offset": 0, 00:16:56.092 "data_size": 65536 00:16:56.092 }, 00:16:56.092 { 00:16:56.092 "name": "BaseBdev2", 00:16:56.092 "uuid": "72046ee6-3120-5700-978c-25cb799b770e", 00:16:56.092 "is_configured": true, 00:16:56.092 "data_offset": 0, 00:16:56.092 "data_size": 65536 00:16:56.092 }, 00:16:56.092 { 00:16:56.092 "name": "BaseBdev3", 00:16:56.092 "uuid": "d3a23697-20c3-5e33-8770-adfaf469c108", 
00:16:56.092 "is_configured": true, 00:16:56.092 "data_offset": 0, 00:16:56.092 "data_size": 65536 00:16:56.092 } 00:16:56.092 ] 00:16:56.092 }' 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.092 16:32:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.661 [2024-12-06 16:32:38.339382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.661 [2024-12-06 16:32:38.339492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.661 [2024-12-06 16:32:38.339604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.661 [2024-12-06 16:32:38.339731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.661 [2024-12-06 16:32:38.339784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:56.661 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:56.920 /dev/nbd0 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:56.920 1+0 records in 00:16:56.920 1+0 records out 00:16:56.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257041 s, 15.9 MB/s 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:56.920 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:57.189 /dev/nbd1 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:57.189 16:32:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:57.189 1+0 records in 00:16:57.189 1+0 records out 00:16:57.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490241 s, 8.4 MB/s 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.189 16:32:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:57.464 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:57.464 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:57.464 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:57.464 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:57.464 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.464 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:57.464 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:57.464 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.464 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.464 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92582 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 92582 ']' 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 92582 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92582 00:16:57.725 killing process with pid 92582 00:16:57.725 Received shutdown signal, test time was about 60.000000 seconds 00:16:57.725 00:16:57.725 Latency(us) 00:16:57.725 [2024-12-06T16:32:39.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.725 [2024-12-06T16:32:39.564Z] =================================================================================================================== 00:16:57.725 [2024-12-06T16:32:39.564Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92582' 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 92582 00:16:57.725 [2024-12-06 16:32:39.467321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:57.725 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 92582 00:16:57.725 [2024-12-06 16:32:39.509086] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:57.985 ************************************ 00:16:57.985 END TEST raid5f_rebuild_test 00:16:57.985 ************************************ 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:57.985 00:16:57.985 real 0m13.790s 00:16:57.985 user 0m17.426s 00:16:57.985 sys 0m1.955s 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.985 16:32:39 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:57.985 16:32:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:57.985 16:32:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.985 16:32:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:57.985 ************************************ 00:16:57.985 START TEST raid5f_rebuild_test_sb 00:16:57.985 ************************************ 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.985 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=93000 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 93000 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 93000 ']' 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:57.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.986 16:32:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.245 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:58.245 Zero copy mechanism will not be used. 00:16:58.245 [2024-12-06 16:32:39.870905] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:16:58.245 [2024-12-06 16:32:39.871056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93000 ] 00:16:58.245 [2024-12-06 16:32:40.036258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.245 [2024-12-06 16:32:40.066020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.505 [2024-12-06 16:32:40.110054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.505 [2024-12-06 16:32:40.110095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:59.075 BaseBdev1_malloc 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.075 [2024-12-06 16:32:40.750554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:59.075 [2024-12-06 16:32:40.750633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.075 [2024-12-06 16:32:40.750670] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:59.075 [2024-12-06 16:32:40.750682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.075 [2024-12-06 16:32:40.752843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.075 [2024-12-06 16:32:40.752883] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:59.075 BaseBdev1 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.075 BaseBdev2_malloc 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.075 16:32:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.075 [2024-12-06 16:32:40.779253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:59.075 [2024-12-06 16:32:40.779324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.075 [2024-12-06 16:32:40.779348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:59.075 [2024-12-06 16:32:40.779359] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.075 [2024-12-06 16:32:40.781614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.075 [2024-12-06 16:32:40.781665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:59.075 BaseBdev2 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:59.075 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.076 BaseBdev3_malloc 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.076 [2024-12-06 16:32:40.808463] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:59.076 [2024-12-06 16:32:40.808536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.076 [2024-12-06 16:32:40.808563] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:59.076 [2024-12-06 16:32:40.808573] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.076 [2024-12-06 16:32:40.810717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.076 [2024-12-06 16:32:40.810752] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:59.076 BaseBdev3 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.076 spare_malloc 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.076 spare_delay 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.076 [2024-12-06 16:32:40.856769] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:59.076 [2024-12-06 16:32:40.856822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.076 [2024-12-06 16:32:40.856863] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:59.076 [2024-12-06 16:32:40.856873] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.076 [2024-12-06 16:32:40.859002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.076 [2024-12-06 16:32:40.859038] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:59.076 spare 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.076 [2024-12-06 16:32:40.868823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:59.076 [2024-12-06 16:32:40.870662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.076 [2024-12-06 16:32:40.870727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:59.076 [2024-12-06 
16:32:40.870892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:59.076 [2024-12-06 16:32:40.870913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:59.076 [2024-12-06 16:32:40.871167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:59.076 [2024-12-06 16:32:40.871608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:59.076 [2024-12-06 16:32:40.871627] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:59.076 [2024-12-06 16:32:40.871741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.076 
16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.076 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.336 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.336 "name": "raid_bdev1", 00:16:59.336 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:16:59.336 "strip_size_kb": 64, 00:16:59.336 "state": "online", 00:16:59.336 "raid_level": "raid5f", 00:16:59.336 "superblock": true, 00:16:59.336 "num_base_bdevs": 3, 00:16:59.336 "num_base_bdevs_discovered": 3, 00:16:59.336 "num_base_bdevs_operational": 3, 00:16:59.336 "base_bdevs_list": [ 00:16:59.336 { 00:16:59.336 "name": "BaseBdev1", 00:16:59.336 "uuid": "7b3ad256-2cec-5d4a-9f59-5621618f4362", 00:16:59.336 "is_configured": true, 00:16:59.336 "data_offset": 2048, 00:16:59.336 "data_size": 63488 00:16:59.336 }, 00:16:59.336 { 00:16:59.336 "name": "BaseBdev2", 00:16:59.336 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:16:59.336 "is_configured": true, 00:16:59.336 "data_offset": 2048, 00:16:59.336 "data_size": 63488 00:16:59.336 }, 00:16:59.336 { 00:16:59.336 "name": "BaseBdev3", 00:16:59.336 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:16:59.336 "is_configured": true, 00:16:59.336 "data_offset": 2048, 00:16:59.336 "data_size": 63488 00:16:59.336 } 00:16:59.336 ] 00:16:59.336 }' 00:16:59.336 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.336 16:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.596 16:32:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:59.596 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.596 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.596 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.596 [2024-12-06 16:32:41.304682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.596 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.596 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:59.596 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.596 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:59.597 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:59.856 [2024-12-06 16:32:41.580122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:59.856 /dev/nbd0 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 
)) 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:59.856 1+0 records in 00:16:59.856 1+0 records out 00:16:59.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364121 s, 11.2 MB/s 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:59.856 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:00.115 496+0 records in 00:17:00.115 496+0 records out 00:17:00.115 65011712 bytes (65 MB, 62 MiB) copied, 0.293546 s, 221 MB/s 00:17:00.115 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:00.115 16:32:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:00.115 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:00.115 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:00.115 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:00.115 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.115 16:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:00.375 [2024-12-06 16:32:42.156862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.375 [2024-12-06 16:32:42.194249] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.375 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.376 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.376 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.376 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.376 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.376 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.376 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.376 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.376 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.376 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.636 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.636 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.636 "name": "raid_bdev1", 
00:17:00.636 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:00.636 "strip_size_kb": 64, 00:17:00.636 "state": "online", 00:17:00.636 "raid_level": "raid5f", 00:17:00.636 "superblock": true, 00:17:00.636 "num_base_bdevs": 3, 00:17:00.636 "num_base_bdevs_discovered": 2, 00:17:00.636 "num_base_bdevs_operational": 2, 00:17:00.636 "base_bdevs_list": [ 00:17:00.636 { 00:17:00.636 "name": null, 00:17:00.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.636 "is_configured": false, 00:17:00.636 "data_offset": 0, 00:17:00.636 "data_size": 63488 00:17:00.636 }, 00:17:00.636 { 00:17:00.636 "name": "BaseBdev2", 00:17:00.636 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:00.636 "is_configured": true, 00:17:00.636 "data_offset": 2048, 00:17:00.636 "data_size": 63488 00:17:00.636 }, 00:17:00.636 { 00:17:00.636 "name": "BaseBdev3", 00:17:00.636 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:00.636 "is_configured": true, 00:17:00.636 "data_offset": 2048, 00:17:00.636 "data_size": 63488 00:17:00.636 } 00:17:00.636 ] 00:17:00.636 }' 00:17:00.636 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.636 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.896 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:00.896 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.896 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.896 [2024-12-06 16:32:42.673430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.896 [2024-12-06 16:32:42.678243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:17:00.896 16:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.896 16:32:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:00.896 [2024-12-06 16:32:42.680455] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.277 "name": "raid_bdev1", 00:17:02.277 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:02.277 "strip_size_kb": 64, 00:17:02.277 "state": "online", 00:17:02.277 "raid_level": "raid5f", 00:17:02.277 "superblock": true, 00:17:02.277 "num_base_bdevs": 3, 00:17:02.277 "num_base_bdevs_discovered": 3, 00:17:02.277 "num_base_bdevs_operational": 3, 00:17:02.277 "process": { 00:17:02.277 "type": "rebuild", 00:17:02.277 "target": "spare", 00:17:02.277 "progress": { 00:17:02.277 "blocks": 20480, 00:17:02.277 "percent": 16 00:17:02.277 } 
00:17:02.277 }, 00:17:02.277 "base_bdevs_list": [ 00:17:02.277 { 00:17:02.277 "name": "spare", 00:17:02.277 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:02.277 "is_configured": true, 00:17:02.277 "data_offset": 2048, 00:17:02.277 "data_size": 63488 00:17:02.277 }, 00:17:02.277 { 00:17:02.277 "name": "BaseBdev2", 00:17:02.277 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:02.277 "is_configured": true, 00:17:02.277 "data_offset": 2048, 00:17:02.277 "data_size": 63488 00:17:02.277 }, 00:17:02.277 { 00:17:02.277 "name": "BaseBdev3", 00:17:02.277 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:02.277 "is_configured": true, 00:17:02.277 "data_offset": 2048, 00:17:02.277 "data_size": 63488 00:17:02.277 } 00:17:02.277 ] 00:17:02.277 }' 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.277 [2024-12-06 16:32:43.845298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.277 [2024-12-06 16:32:43.891289] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:02.277 [2024-12-06 16:32:43.891383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.277 [2024-12-06 16:32:43.891402] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.277 [2024-12-06 16:32:43.891416] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.277 "name": "raid_bdev1", 00:17:02.277 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:02.277 "strip_size_kb": 64, 00:17:02.277 "state": "online", 00:17:02.277 "raid_level": "raid5f", 00:17:02.277 "superblock": true, 00:17:02.277 "num_base_bdevs": 3, 00:17:02.277 "num_base_bdevs_discovered": 2, 00:17:02.277 "num_base_bdevs_operational": 2, 00:17:02.277 "base_bdevs_list": [ 00:17:02.277 { 00:17:02.277 "name": null, 00:17:02.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.277 "is_configured": false, 00:17:02.277 "data_offset": 0, 00:17:02.277 "data_size": 63488 00:17:02.277 }, 00:17:02.277 { 00:17:02.277 "name": "BaseBdev2", 00:17:02.277 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:02.277 "is_configured": true, 00:17:02.277 "data_offset": 2048, 00:17:02.277 "data_size": 63488 00:17:02.277 }, 00:17:02.277 { 00:17:02.277 "name": "BaseBdev3", 00:17:02.277 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:02.277 "is_configured": true, 00:17:02.277 "data_offset": 2048, 00:17:02.277 "data_size": 63488 00:17:02.277 } 00:17:02.277 ] 00:17:02.277 }' 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.277 16:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.537 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.537 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.537 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.537 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.537 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.537 16:32:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.537 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.537 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.537 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.537 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.797 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.797 "name": "raid_bdev1", 00:17:02.797 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:02.797 "strip_size_kb": 64, 00:17:02.797 "state": "online", 00:17:02.797 "raid_level": "raid5f", 00:17:02.797 "superblock": true, 00:17:02.797 "num_base_bdevs": 3, 00:17:02.797 "num_base_bdevs_discovered": 2, 00:17:02.797 "num_base_bdevs_operational": 2, 00:17:02.797 "base_bdevs_list": [ 00:17:02.797 { 00:17:02.797 "name": null, 00:17:02.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.797 "is_configured": false, 00:17:02.797 "data_offset": 0, 00:17:02.797 "data_size": 63488 00:17:02.797 }, 00:17:02.797 { 00:17:02.797 "name": "BaseBdev2", 00:17:02.797 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:02.797 "is_configured": true, 00:17:02.797 "data_offset": 2048, 00:17:02.797 "data_size": 63488 00:17:02.797 }, 00:17:02.797 { 00:17:02.797 "name": "BaseBdev3", 00:17:02.797 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:02.797 "is_configured": true, 00:17:02.797 "data_offset": 2048, 00:17:02.797 "data_size": 63488 00:17:02.797 } 00:17:02.797 ] 00:17:02.797 }' 00:17:02.797 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.797 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.797 16:32:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.797 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.797 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.797 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.797 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.797 [2024-12-06 16:32:44.457048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.797 [2024-12-06 16:32:44.461806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:17:02.797 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.797 16:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:02.797 [2024-12-06 16:32:44.464010] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.733 16:32:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.733 "name": "raid_bdev1", 00:17:03.733 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:03.733 "strip_size_kb": 64, 00:17:03.733 "state": "online", 00:17:03.733 "raid_level": "raid5f", 00:17:03.733 "superblock": true, 00:17:03.733 "num_base_bdevs": 3, 00:17:03.733 "num_base_bdevs_discovered": 3, 00:17:03.733 "num_base_bdevs_operational": 3, 00:17:03.733 "process": { 00:17:03.733 "type": "rebuild", 00:17:03.733 "target": "spare", 00:17:03.733 "progress": { 00:17:03.733 "blocks": 20480, 00:17:03.733 "percent": 16 00:17:03.733 } 00:17:03.733 }, 00:17:03.733 "base_bdevs_list": [ 00:17:03.733 { 00:17:03.733 "name": "spare", 00:17:03.733 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:03.733 "is_configured": true, 00:17:03.733 "data_offset": 2048, 00:17:03.733 "data_size": 63488 00:17:03.733 }, 00:17:03.733 { 00:17:03.733 "name": "BaseBdev2", 00:17:03.733 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:03.733 "is_configured": true, 00:17:03.733 "data_offset": 2048, 00:17:03.733 "data_size": 63488 00:17:03.733 }, 00:17:03.733 { 00:17:03.733 "name": "BaseBdev3", 00:17:03.733 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:03.733 "is_configured": true, 00:17:03.733 "data_offset": 2048, 00:17:03.733 "data_size": 63488 00:17:03.733 } 00:17:03.733 ] 00:17:03.733 }' 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.733 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:03.991 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=475 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.991 16:32:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.991 "name": "raid_bdev1", 00:17:03.991 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:03.991 "strip_size_kb": 64, 00:17:03.991 "state": "online", 00:17:03.991 "raid_level": "raid5f", 00:17:03.991 "superblock": true, 00:17:03.991 "num_base_bdevs": 3, 00:17:03.991 "num_base_bdevs_discovered": 3, 00:17:03.991 "num_base_bdevs_operational": 3, 00:17:03.991 "process": { 00:17:03.991 "type": "rebuild", 00:17:03.991 "target": "spare", 00:17:03.991 "progress": { 00:17:03.991 "blocks": 22528, 00:17:03.991 "percent": 17 00:17:03.991 } 00:17:03.991 }, 00:17:03.991 "base_bdevs_list": [ 00:17:03.991 { 00:17:03.991 "name": "spare", 00:17:03.991 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:03.991 "is_configured": true, 00:17:03.991 "data_offset": 2048, 00:17:03.991 "data_size": 63488 00:17:03.991 }, 00:17:03.991 { 00:17:03.991 "name": "BaseBdev2", 00:17:03.991 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:03.991 "is_configured": true, 00:17:03.991 "data_offset": 2048, 00:17:03.991 "data_size": 63488 00:17:03.991 }, 00:17:03.991 { 00:17:03.991 "name": "BaseBdev3", 00:17:03.991 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:03.991 "is_configured": true, 00:17:03.991 "data_offset": 2048, 00:17:03.991 "data_size": 63488 00:17:03.991 } 00:17:03.991 ] 00:17:03.991 }' 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.991 16:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.991 16:32:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.928 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.928 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.928 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.928 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.928 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.928 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.928 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.928 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.928 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.928 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.928 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.187 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.187 "name": "raid_bdev1", 00:17:05.187 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:05.187 "strip_size_kb": 64, 00:17:05.187 "state": "online", 00:17:05.187 "raid_level": "raid5f", 00:17:05.187 "superblock": true, 00:17:05.187 "num_base_bdevs": 3, 00:17:05.187 "num_base_bdevs_discovered": 3, 00:17:05.187 "num_base_bdevs_operational": 3, 00:17:05.187 "process": { 00:17:05.187 "type": "rebuild", 00:17:05.187 "target": "spare", 00:17:05.187 "progress": { 00:17:05.187 "blocks": 45056, 00:17:05.187 "percent": 35 00:17:05.187 } 00:17:05.187 }, 00:17:05.187 
"base_bdevs_list": [ 00:17:05.187 { 00:17:05.187 "name": "spare", 00:17:05.187 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:05.187 "is_configured": true, 00:17:05.187 "data_offset": 2048, 00:17:05.187 "data_size": 63488 00:17:05.187 }, 00:17:05.187 { 00:17:05.187 "name": "BaseBdev2", 00:17:05.188 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:05.188 "is_configured": true, 00:17:05.188 "data_offset": 2048, 00:17:05.188 "data_size": 63488 00:17:05.188 }, 00:17:05.188 { 00:17:05.188 "name": "BaseBdev3", 00:17:05.188 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:05.188 "is_configured": true, 00:17:05.188 "data_offset": 2048, 00:17:05.188 "data_size": 63488 00:17:05.188 } 00:17:05.188 ] 00:17:05.188 }' 00:17:05.188 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.188 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.188 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.188 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.188 16:32:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.126 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.126 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.126 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.126 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.126 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.126 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.126 16:32:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.126 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.126 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.126 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.126 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.126 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.126 "name": "raid_bdev1", 00:17:06.126 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:06.126 "strip_size_kb": 64, 00:17:06.126 "state": "online", 00:17:06.126 "raid_level": "raid5f", 00:17:06.126 "superblock": true, 00:17:06.126 "num_base_bdevs": 3, 00:17:06.126 "num_base_bdevs_discovered": 3, 00:17:06.126 "num_base_bdevs_operational": 3, 00:17:06.126 "process": { 00:17:06.126 "type": "rebuild", 00:17:06.126 "target": "spare", 00:17:06.126 "progress": { 00:17:06.126 "blocks": 67584, 00:17:06.126 "percent": 53 00:17:06.126 } 00:17:06.126 }, 00:17:06.126 "base_bdevs_list": [ 00:17:06.126 { 00:17:06.126 "name": "spare", 00:17:06.126 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:06.126 "is_configured": true, 00:17:06.126 "data_offset": 2048, 00:17:06.126 "data_size": 63488 00:17:06.126 }, 00:17:06.126 { 00:17:06.126 "name": "BaseBdev2", 00:17:06.126 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:06.126 "is_configured": true, 00:17:06.126 "data_offset": 2048, 00:17:06.126 "data_size": 63488 00:17:06.126 }, 00:17:06.126 { 00:17:06.126 "name": "BaseBdev3", 00:17:06.126 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:06.126 "is_configured": true, 00:17:06.126 "data_offset": 2048, 00:17:06.126 "data_size": 63488 00:17:06.126 } 00:17:06.126 ] 00:17:06.126 }' 00:17:06.126 16:32:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.385 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.385 16:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.385 16:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.385 16:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.321 "name": "raid_bdev1", 00:17:07.321 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:07.321 
"strip_size_kb": 64, 00:17:07.321 "state": "online", 00:17:07.321 "raid_level": "raid5f", 00:17:07.321 "superblock": true, 00:17:07.321 "num_base_bdevs": 3, 00:17:07.321 "num_base_bdevs_discovered": 3, 00:17:07.321 "num_base_bdevs_operational": 3, 00:17:07.321 "process": { 00:17:07.321 "type": "rebuild", 00:17:07.321 "target": "spare", 00:17:07.321 "progress": { 00:17:07.321 "blocks": 92160, 00:17:07.321 "percent": 72 00:17:07.321 } 00:17:07.321 }, 00:17:07.321 "base_bdevs_list": [ 00:17:07.321 { 00:17:07.321 "name": "spare", 00:17:07.321 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:07.321 "is_configured": true, 00:17:07.321 "data_offset": 2048, 00:17:07.321 "data_size": 63488 00:17:07.321 }, 00:17:07.321 { 00:17:07.321 "name": "BaseBdev2", 00:17:07.321 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:07.321 "is_configured": true, 00:17:07.321 "data_offset": 2048, 00:17:07.321 "data_size": 63488 00:17:07.321 }, 00:17:07.321 { 00:17:07.321 "name": "BaseBdev3", 00:17:07.321 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:07.321 "is_configured": true, 00:17:07.321 "data_offset": 2048, 00:17:07.321 "data_size": 63488 00:17:07.321 } 00:17:07.321 ] 00:17:07.321 }' 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.321 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.589 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.589 16:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.565 "name": "raid_bdev1", 00:17:08.565 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:08.565 "strip_size_kb": 64, 00:17:08.565 "state": "online", 00:17:08.565 "raid_level": "raid5f", 00:17:08.565 "superblock": true, 00:17:08.565 "num_base_bdevs": 3, 00:17:08.565 "num_base_bdevs_discovered": 3, 00:17:08.565 "num_base_bdevs_operational": 3, 00:17:08.565 "process": { 00:17:08.565 "type": "rebuild", 00:17:08.565 "target": "spare", 00:17:08.565 "progress": { 00:17:08.565 "blocks": 114688, 00:17:08.565 "percent": 90 00:17:08.565 } 00:17:08.565 }, 00:17:08.565 "base_bdevs_list": [ 00:17:08.565 { 00:17:08.565 "name": "spare", 00:17:08.565 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:08.565 "is_configured": true, 00:17:08.565 "data_offset": 2048, 00:17:08.565 "data_size": 63488 00:17:08.565 }, 00:17:08.565 { 00:17:08.565 "name": "BaseBdev2", 00:17:08.565 "uuid": 
"2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:08.565 "is_configured": true, 00:17:08.565 "data_offset": 2048, 00:17:08.565 "data_size": 63488 00:17:08.565 }, 00:17:08.565 { 00:17:08.565 "name": "BaseBdev3", 00:17:08.565 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:08.565 "is_configured": true, 00:17:08.565 "data_offset": 2048, 00:17:08.565 "data_size": 63488 00:17:08.565 } 00:17:08.565 ] 00:17:08.565 }' 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.565 16:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.134 [2024-12-06 16:32:50.714843] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:09.134 [2024-12-06 16:32:50.714942] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:09.134 [2024-12-06 16:32:50.715111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.702 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.703 "name": "raid_bdev1", 00:17:09.703 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:09.703 "strip_size_kb": 64, 00:17:09.703 "state": "online", 00:17:09.703 "raid_level": "raid5f", 00:17:09.703 "superblock": true, 00:17:09.703 "num_base_bdevs": 3, 00:17:09.703 "num_base_bdevs_discovered": 3, 00:17:09.703 "num_base_bdevs_operational": 3, 00:17:09.703 "base_bdevs_list": [ 00:17:09.703 { 00:17:09.703 "name": "spare", 00:17:09.703 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:09.703 "is_configured": true, 00:17:09.703 "data_offset": 2048, 00:17:09.703 "data_size": 63488 00:17:09.703 }, 00:17:09.703 { 00:17:09.703 "name": "BaseBdev2", 00:17:09.703 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:09.703 "is_configured": true, 00:17:09.703 "data_offset": 2048, 00:17:09.703 "data_size": 63488 00:17:09.703 }, 00:17:09.703 { 00:17:09.703 "name": "BaseBdev3", 00:17:09.703 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:09.703 "is_configured": true, 00:17:09.703 "data_offset": 2048, 00:17:09.703 "data_size": 63488 00:17:09.703 } 00:17:09.703 ] 00:17:09.703 }' 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.703 "name": "raid_bdev1", 00:17:09.703 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:09.703 "strip_size_kb": 64, 00:17:09.703 "state": "online", 00:17:09.703 "raid_level": "raid5f", 00:17:09.703 "superblock": true, 00:17:09.703 "num_base_bdevs": 3, 00:17:09.703 "num_base_bdevs_discovered": 3, 00:17:09.703 "num_base_bdevs_operational": 3, 00:17:09.703 "base_bdevs_list": [ 
00:17:09.703 { 00:17:09.703 "name": "spare", 00:17:09.703 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:09.703 "is_configured": true, 00:17:09.703 "data_offset": 2048, 00:17:09.703 "data_size": 63488 00:17:09.703 }, 00:17:09.703 { 00:17:09.703 "name": "BaseBdev2", 00:17:09.703 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:09.703 "is_configured": true, 00:17:09.703 "data_offset": 2048, 00:17:09.703 "data_size": 63488 00:17:09.703 }, 00:17:09.703 { 00:17:09.703 "name": "BaseBdev3", 00:17:09.703 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:09.703 "is_configured": true, 00:17:09.703 "data_offset": 2048, 00:17:09.703 "data_size": 63488 00:17:09.703 } 00:17:09.703 ] 00:17:09.703 }' 00:17:09.703 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.977 16:32:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.977 "name": "raid_bdev1", 00:17:09.977 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:09.977 "strip_size_kb": 64, 00:17:09.977 "state": "online", 00:17:09.977 "raid_level": "raid5f", 00:17:09.977 "superblock": true, 00:17:09.977 "num_base_bdevs": 3, 00:17:09.977 "num_base_bdevs_discovered": 3, 00:17:09.977 "num_base_bdevs_operational": 3, 00:17:09.977 "base_bdevs_list": [ 00:17:09.977 { 00:17:09.977 "name": "spare", 00:17:09.977 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:09.977 "is_configured": true, 00:17:09.977 "data_offset": 2048, 00:17:09.977 "data_size": 63488 00:17:09.977 }, 00:17:09.977 { 00:17:09.977 "name": "BaseBdev2", 00:17:09.977 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:09.977 "is_configured": true, 00:17:09.977 "data_offset": 2048, 00:17:09.977 "data_size": 63488 00:17:09.977 }, 00:17:09.977 { 00:17:09.977 "name": "BaseBdev3", 00:17:09.977 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:09.977 "is_configured": true, 00:17:09.977 "data_offset": 2048, 00:17:09.977 
"data_size": 63488 00:17:09.977 } 00:17:09.977 ] 00:17:09.977 }' 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.977 16:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.235 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:10.235 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.235 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.235 [2024-12-06 16:32:52.043072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.235 [2024-12-06 16:32:52.043111] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.235 [2024-12-06 16:32:52.043197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.235 [2024-12-06 16:32:52.043329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.235 [2024-12-06 16:32:52.043351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:10.235 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.235 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.235 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.235 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.235 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:10.235 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.494 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:17:10.494 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:10.494 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:10.494 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:10.495 /dev/nbd0 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:10.495 16:32:52 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:10.495 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.753 1+0 records in 00:17:10.753 1+0 records out 00:17:10.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510854 s, 8.0 MB/s 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:10.753 /dev/nbd1 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:10.753 16:32:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.753 1+0 records in 00:17:10.753 1+0 records out 00:17:10.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430856 s, 9.5 MB/s 00:17:10.753 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.012 16:32:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:11.012 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:11.271 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:11.271 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:11.271 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:11.271 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:11.271 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:11.271 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:11.271 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:11.271 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:11.271 16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:11.271 
16:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.530 [2024-12-06 16:32:53.156351] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.530 
[2024-12-06 16:32:53.156423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.530 [2024-12-06 16:32:53.156452] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:11.530 [2024-12-06 16:32:53.156462] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.530 [2024-12-06 16:32:53.158815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.530 [2024-12-06 16:32:53.158850] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:11.530 [2024-12-06 16:32:53.158951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:11.530 [2024-12-06 16:32:53.159021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.530 [2024-12-06 16:32:53.159150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.530 [2024-12-06 16:32:53.159296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.530 spare 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.530 [2024-12-06 16:32:53.259227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:17:11.530 [2024-12-06 16:32:53.259276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:11.530 [2024-12-06 16:32:53.259629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:17:11.530 [2024-12-06 16:32:53.260106] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:17:11.530 [2024-12-06 16:32:53.260130] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:17:11.530 [2024-12-06 16:32:53.260330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.530 "name": "raid_bdev1", 00:17:11.530 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:11.530 "strip_size_kb": 64, 00:17:11.530 "state": "online", 00:17:11.530 "raid_level": "raid5f", 00:17:11.530 "superblock": true, 00:17:11.530 "num_base_bdevs": 3, 00:17:11.530 "num_base_bdevs_discovered": 3, 00:17:11.530 "num_base_bdevs_operational": 3, 00:17:11.530 "base_bdevs_list": [ 00:17:11.530 { 00:17:11.530 "name": "spare", 00:17:11.530 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:11.530 "is_configured": true, 00:17:11.530 "data_offset": 2048, 00:17:11.530 "data_size": 63488 00:17:11.530 }, 00:17:11.530 { 00:17:11.530 "name": "BaseBdev2", 00:17:11.530 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:11.530 "is_configured": true, 00:17:11.530 "data_offset": 2048, 00:17:11.530 "data_size": 63488 00:17:11.530 }, 00:17:11.530 { 00:17:11.530 "name": "BaseBdev3", 00:17:11.530 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:11.530 "is_configured": true, 00:17:11.530 "data_offset": 2048, 00:17:11.530 "data_size": 63488 00:17:11.530 } 00:17:11.530 ] 00:17:11.530 }' 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.530 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.098 "name": "raid_bdev1", 00:17:12.098 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:12.098 "strip_size_kb": 64, 00:17:12.098 "state": "online", 00:17:12.098 "raid_level": "raid5f", 00:17:12.098 "superblock": true, 00:17:12.098 "num_base_bdevs": 3, 00:17:12.098 "num_base_bdevs_discovered": 3, 00:17:12.098 "num_base_bdevs_operational": 3, 00:17:12.098 "base_bdevs_list": [ 00:17:12.098 { 00:17:12.098 "name": "spare", 00:17:12.098 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:12.098 "is_configured": true, 00:17:12.098 "data_offset": 2048, 00:17:12.098 "data_size": 63488 00:17:12.098 }, 00:17:12.098 { 00:17:12.098 "name": "BaseBdev2", 00:17:12.098 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:12.098 "is_configured": true, 00:17:12.098 "data_offset": 2048, 00:17:12.098 "data_size": 63488 00:17:12.098 }, 00:17:12.098 { 00:17:12.098 "name": "BaseBdev3", 00:17:12.098 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:12.098 "is_configured": true, 00:17:12.098 "data_offset": 2048, 00:17:12.098 "data_size": 63488 00:17:12.098 } 00:17:12.098 ] 00:17:12.098 }' 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.098 [2024-12-06 16:32:53.847336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.098 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.098 "name": "raid_bdev1", 00:17:12.098 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:12.098 "strip_size_kb": 64, 00:17:12.098 "state": "online", 00:17:12.098 "raid_level": "raid5f", 00:17:12.098 "superblock": true, 00:17:12.098 "num_base_bdevs": 3, 00:17:12.098 "num_base_bdevs_discovered": 2, 00:17:12.098 "num_base_bdevs_operational": 2, 00:17:12.098 "base_bdevs_list": [ 00:17:12.098 { 00:17:12.098 "name": null, 00:17:12.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.098 "is_configured": false, 00:17:12.098 "data_offset": 0, 00:17:12.098 "data_size": 63488 00:17:12.098 }, 00:17:12.099 { 00:17:12.099 "name": "BaseBdev2", 
00:17:12.099 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:12.099 "is_configured": true, 00:17:12.099 "data_offset": 2048, 00:17:12.099 "data_size": 63488 00:17:12.099 }, 00:17:12.099 { 00:17:12.099 "name": "BaseBdev3", 00:17:12.099 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:12.099 "is_configured": true, 00:17:12.099 "data_offset": 2048, 00:17:12.099 "data_size": 63488 00:17:12.099 } 00:17:12.099 ] 00:17:12.099 }' 00:17:12.099 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.099 16:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.665 16:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:12.665 16:32:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.665 16:32:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.665 [2024-12-06 16:32:54.322586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.665 [2024-12-06 16:32:54.322817] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:12.665 [2024-12-06 16:32:54.322837] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:12.665 [2024-12-06 16:32:54.322882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.665 [2024-12-06 16:32:54.327572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:17:12.665 16:32:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.665 16:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:12.665 [2024-12-06 16:32:54.330051] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:13.603 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.603 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.603 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.603 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.603 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.603 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.603 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.603 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.603 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.603 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.603 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.603 "name": "raid_bdev1", 00:17:13.603 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:13.603 "strip_size_kb": 64, 00:17:13.603 "state": "online", 00:17:13.603 
"raid_level": "raid5f", 00:17:13.603 "superblock": true, 00:17:13.603 "num_base_bdevs": 3, 00:17:13.603 "num_base_bdevs_discovered": 3, 00:17:13.603 "num_base_bdevs_operational": 3, 00:17:13.603 "process": { 00:17:13.603 "type": "rebuild", 00:17:13.603 "target": "spare", 00:17:13.603 "progress": { 00:17:13.603 "blocks": 20480, 00:17:13.603 "percent": 16 00:17:13.603 } 00:17:13.603 }, 00:17:13.603 "base_bdevs_list": [ 00:17:13.603 { 00:17:13.603 "name": "spare", 00:17:13.603 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:13.603 "is_configured": true, 00:17:13.603 "data_offset": 2048, 00:17:13.603 "data_size": 63488 00:17:13.603 }, 00:17:13.603 { 00:17:13.603 "name": "BaseBdev2", 00:17:13.603 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:13.603 "is_configured": true, 00:17:13.603 "data_offset": 2048, 00:17:13.603 "data_size": 63488 00:17:13.603 }, 00:17:13.603 { 00:17:13.603 "name": "BaseBdev3", 00:17:13.603 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:13.603 "is_configured": true, 00:17:13.603 "data_offset": 2048, 00:17:13.603 "data_size": 63488 00:17:13.603 } 00:17:13.603 ] 00:17:13.603 }' 00:17:13.603 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.604 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.604 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.863 [2024-12-06 16:32:55.477730] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.863 [2024-12-06 16:32:55.541132] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:13.863 [2024-12-06 16:32:55.541251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.863 [2024-12-06 16:32:55.541274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.863 [2024-12-06 16:32:55.541282] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.863 "name": "raid_bdev1", 00:17:13.863 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:13.863 "strip_size_kb": 64, 00:17:13.863 "state": "online", 00:17:13.863 "raid_level": "raid5f", 00:17:13.863 "superblock": true, 00:17:13.863 "num_base_bdevs": 3, 00:17:13.863 "num_base_bdevs_discovered": 2, 00:17:13.863 "num_base_bdevs_operational": 2, 00:17:13.863 "base_bdevs_list": [ 00:17:13.863 { 00:17:13.863 "name": null, 00:17:13.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.863 "is_configured": false, 00:17:13.863 "data_offset": 0, 00:17:13.863 "data_size": 63488 00:17:13.863 }, 00:17:13.863 { 00:17:13.863 "name": "BaseBdev2", 00:17:13.863 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:13.863 "is_configured": true, 00:17:13.863 "data_offset": 2048, 00:17:13.863 "data_size": 63488 00:17:13.863 }, 00:17:13.863 { 00:17:13.863 "name": "BaseBdev3", 00:17:13.863 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:13.863 "is_configured": true, 00:17:13.863 "data_offset": 2048, 00:17:13.863 "data_size": 63488 00:17:13.863 } 00:17:13.863 ] 00:17:13.863 }' 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.863 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.433 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:14.433 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.433 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.433 [2024-12-06 16:32:55.982431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:14.433 [2024-12-06 16:32:55.982511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.433 [2024-12-06 16:32:55.982538] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:14.433 [2024-12-06 16:32:55.982548] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.433 [2024-12-06 16:32:55.983024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.433 [2024-12-06 16:32:55.983054] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:14.433 [2024-12-06 16:32:55.983149] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:14.433 [2024-12-06 16:32:55.983164] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:14.433 [2024-12-06 16:32:55.983181] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:14.433 [2024-12-06 16:32:55.983219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.433 [2024-12-06 16:32:55.987790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:14.433 spare 00:17:14.433 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.433 16:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:14.433 [2024-12-06 16:32:55.990140] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.372 16:32:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.373 16:32:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.373 16:32:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.373 16:32:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.373 16:32:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.373 16:32:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.373 16:32:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.373 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.373 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.373 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.373 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.373 "name": "raid_bdev1", 00:17:15.373 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:15.373 "strip_size_kb": 64, 00:17:15.373 "state": 
"online", 00:17:15.373 "raid_level": "raid5f", 00:17:15.373 "superblock": true, 00:17:15.373 "num_base_bdevs": 3, 00:17:15.373 "num_base_bdevs_discovered": 3, 00:17:15.373 "num_base_bdevs_operational": 3, 00:17:15.373 "process": { 00:17:15.373 "type": "rebuild", 00:17:15.373 "target": "spare", 00:17:15.373 "progress": { 00:17:15.373 "blocks": 20480, 00:17:15.373 "percent": 16 00:17:15.373 } 00:17:15.373 }, 00:17:15.373 "base_bdevs_list": [ 00:17:15.373 { 00:17:15.373 "name": "spare", 00:17:15.373 "uuid": "a637c7e4-a47a-53e5-9b27-b6980c292e89", 00:17:15.373 "is_configured": true, 00:17:15.373 "data_offset": 2048, 00:17:15.373 "data_size": 63488 00:17:15.373 }, 00:17:15.373 { 00:17:15.373 "name": "BaseBdev2", 00:17:15.373 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:15.373 "is_configured": true, 00:17:15.373 "data_offset": 2048, 00:17:15.373 "data_size": 63488 00:17:15.373 }, 00:17:15.373 { 00:17:15.373 "name": "BaseBdev3", 00:17:15.373 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:15.373 "is_configured": true, 00:17:15.373 "data_offset": 2048, 00:17:15.373 "data_size": 63488 00:17:15.373 } 00:17:15.373 ] 00:17:15.373 }' 00:17:15.373 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.373 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.373 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.373 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.373 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.373 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.373 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.373 [2024-12-06 16:32:57.122298] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.373 [2024-12-06 16:32:57.199826] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:15.373 [2024-12-06 16:32:57.199893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.373 [2024-12-06 16:32:57.199926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.373 [2024-12-06 16:32:57.199944] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.633 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.633 "name": "raid_bdev1", 00:17:15.633 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:15.633 "strip_size_kb": 64, 00:17:15.633 "state": "online", 00:17:15.633 "raid_level": "raid5f", 00:17:15.633 "superblock": true, 00:17:15.633 "num_base_bdevs": 3, 00:17:15.633 "num_base_bdevs_discovered": 2, 00:17:15.633 "num_base_bdevs_operational": 2, 00:17:15.633 "base_bdevs_list": [ 00:17:15.633 { 00:17:15.633 "name": null, 00:17:15.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.634 "is_configured": false, 00:17:15.634 "data_offset": 0, 00:17:15.634 "data_size": 63488 00:17:15.634 }, 00:17:15.634 { 00:17:15.634 "name": "BaseBdev2", 00:17:15.634 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:15.634 "is_configured": true, 00:17:15.634 "data_offset": 2048, 00:17:15.634 "data_size": 63488 00:17:15.634 }, 00:17:15.634 { 00:17:15.634 "name": "BaseBdev3", 00:17:15.634 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:15.634 "is_configured": true, 00:17:15.634 "data_offset": 2048, 00:17:15.634 "data_size": 63488 00:17:15.634 } 00:17:15.634 ] 00:17:15.634 }' 00:17:15.634 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.634 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.894 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:15.894 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:15.894 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:15.894 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:15.894 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.894 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.894 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.894 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.894 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.894 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.894 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.894 "name": "raid_bdev1", 00:17:15.894 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:15.894 "strip_size_kb": 64, 00:17:15.894 "state": "online", 00:17:15.894 "raid_level": "raid5f", 00:17:15.894 "superblock": true, 00:17:15.894 "num_base_bdevs": 3, 00:17:15.894 "num_base_bdevs_discovered": 2, 00:17:15.894 "num_base_bdevs_operational": 2, 00:17:15.894 "base_bdevs_list": [ 00:17:15.894 { 00:17:15.894 "name": null, 00:17:15.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.894 "is_configured": false, 00:17:15.894 "data_offset": 0, 00:17:15.894 "data_size": 63488 00:17:15.894 }, 00:17:15.894 { 00:17:15.894 "name": "BaseBdev2", 00:17:15.894 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:15.894 "is_configured": true, 00:17:15.894 "data_offset": 2048, 00:17:15.894 "data_size": 63488 00:17:15.894 }, 00:17:15.894 { 00:17:15.894 "name": "BaseBdev3", 00:17:15.894 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:15.894 "is_configured": true, 
00:17:15.894 "data_offset": 2048, 00:17:15.894 "data_size": 63488 00:17:15.894 } 00:17:15.894 ] 00:17:15.894 }' 00:17:15.894 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.154 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.154 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.154 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.154 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:16.154 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.154 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.154 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.154 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:16.154 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.154 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.154 [2024-12-06 16:32:57.824620] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:16.154 [2024-12-06 16:32:57.824680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.154 [2024-12-06 16:32:57.824716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:16.154 [2024-12-06 16:32:57.824726] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.154 [2024-12-06 16:32:57.825126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.154 [2024-12-06 
16:32:57.825155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:16.154 [2024-12-06 16:32:57.825247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:16.154 [2024-12-06 16:32:57.825273] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:16.154 [2024-12-06 16:32:57.825281] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:16.154 [2024-12-06 16:32:57.825293] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:16.154 BaseBdev1 00:17:16.154 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.154 16:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.095 16:32:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.095 "name": "raid_bdev1", 00:17:17.095 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:17.095 "strip_size_kb": 64, 00:17:17.095 "state": "online", 00:17:17.095 "raid_level": "raid5f", 00:17:17.095 "superblock": true, 00:17:17.095 "num_base_bdevs": 3, 00:17:17.095 "num_base_bdevs_discovered": 2, 00:17:17.095 "num_base_bdevs_operational": 2, 00:17:17.095 "base_bdevs_list": [ 00:17:17.095 { 00:17:17.095 "name": null, 00:17:17.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.095 "is_configured": false, 00:17:17.095 "data_offset": 0, 00:17:17.095 "data_size": 63488 00:17:17.095 }, 00:17:17.095 { 00:17:17.095 "name": "BaseBdev2", 00:17:17.095 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:17.095 "is_configured": true, 00:17:17.095 "data_offset": 2048, 00:17:17.095 "data_size": 63488 00:17:17.095 }, 00:17:17.095 { 00:17:17.095 "name": "BaseBdev3", 00:17:17.095 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:17.095 "is_configured": true, 00:17:17.095 "data_offset": 2048, 00:17:17.095 "data_size": 63488 00:17:17.095 } 00:17:17.095 ] 00:17:17.095 }' 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.095 16:32:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.667 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.667 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.667 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.667 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.667 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.667 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.667 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.667 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.667 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.667 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.667 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.668 "name": "raid_bdev1", 00:17:17.668 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:17.668 "strip_size_kb": 64, 00:17:17.668 "state": "online", 00:17:17.668 "raid_level": "raid5f", 00:17:17.668 "superblock": true, 00:17:17.668 "num_base_bdevs": 3, 00:17:17.668 "num_base_bdevs_discovered": 2, 00:17:17.668 "num_base_bdevs_operational": 2, 00:17:17.668 "base_bdevs_list": [ 00:17:17.668 { 00:17:17.668 "name": null, 00:17:17.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.668 "is_configured": false, 00:17:17.668 "data_offset": 0, 00:17:17.668 "data_size": 63488 00:17:17.668 }, 00:17:17.668 { 00:17:17.668 "name": "BaseBdev2", 00:17:17.668 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 
00:17:17.668 "is_configured": true, 00:17:17.668 "data_offset": 2048, 00:17:17.668 "data_size": 63488 00:17:17.668 }, 00:17:17.668 { 00:17:17.668 "name": "BaseBdev3", 00:17:17.668 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:17.668 "is_configured": true, 00:17:17.668 "data_offset": 2048, 00:17:17.668 "data_size": 63488 00:17:17.668 } 00:17:17.668 ] 00:17:17.668 }' 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.668 16:32:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.668 [2024-12-06 16:32:59.465935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:17.668 [2024-12-06 16:32:59.466109] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:17.668 [2024-12-06 16:32:59.466123] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:17.668 request: 00:17:17.668 { 00:17:17.668 "base_bdev": "BaseBdev1", 00:17:17.668 "raid_bdev": "raid_bdev1", 00:17:17.668 "method": "bdev_raid_add_base_bdev", 00:17:17.668 "req_id": 1 00:17:17.668 } 00:17:17.668 Got JSON-RPC error response 00:17:17.668 response: 00:17:17.668 { 00:17:17.668 "code": -22, 00:17:17.668 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:17.668 } 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:17.668 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:19.047 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:19.047 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.047 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.047 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.047 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.047 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.047 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.047 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.047 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.047 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.047 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.048 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.048 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.048 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.048 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.048 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.048 "name": "raid_bdev1", 00:17:19.048 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:19.048 "strip_size_kb": 64, 00:17:19.048 "state": "online", 00:17:19.048 "raid_level": "raid5f", 00:17:19.048 "superblock": true, 00:17:19.048 "num_base_bdevs": 3, 00:17:19.048 "num_base_bdevs_discovered": 2, 00:17:19.048 "num_base_bdevs_operational": 2, 00:17:19.048 "base_bdevs_list": [ 00:17:19.048 { 00:17:19.048 "name": null, 00:17:19.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.048 "is_configured": false, 00:17:19.048 "data_offset": 0, 00:17:19.048 "data_size": 63488 00:17:19.048 }, 00:17:19.048 { 00:17:19.048 
"name": "BaseBdev2", 00:17:19.048 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:19.048 "is_configured": true, 00:17:19.048 "data_offset": 2048, 00:17:19.048 "data_size": 63488 00:17:19.048 }, 00:17:19.048 { 00:17:19.048 "name": "BaseBdev3", 00:17:19.048 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:19.048 "is_configured": true, 00:17:19.048 "data_offset": 2048, 00:17:19.048 "data_size": 63488 00:17:19.048 } 00:17:19.048 ] 00:17:19.048 }' 00:17:19.048 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.048 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.307 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.307 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.307 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.307 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.307 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.307 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.307 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.307 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.307 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.307 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.307 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.307 "name": "raid_bdev1", 00:17:19.307 "uuid": "86a4b8d9-56a8-42a6-b248-d4f91ca4a24c", 00:17:19.307 
"strip_size_kb": 64, 00:17:19.307 "state": "online", 00:17:19.307 "raid_level": "raid5f", 00:17:19.307 "superblock": true, 00:17:19.307 "num_base_bdevs": 3, 00:17:19.307 "num_base_bdevs_discovered": 2, 00:17:19.307 "num_base_bdevs_operational": 2, 00:17:19.307 "base_bdevs_list": [ 00:17:19.307 { 00:17:19.307 "name": null, 00:17:19.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.307 "is_configured": false, 00:17:19.307 "data_offset": 0, 00:17:19.307 "data_size": 63488 00:17:19.307 }, 00:17:19.307 { 00:17:19.307 "name": "BaseBdev2", 00:17:19.307 "uuid": "2e888f3a-fc74-5971-b5b6-095e942151cb", 00:17:19.307 "is_configured": true, 00:17:19.307 "data_offset": 2048, 00:17:19.307 "data_size": 63488 00:17:19.307 }, 00:17:19.307 { 00:17:19.307 "name": "BaseBdev3", 00:17:19.307 "uuid": "502bb7f4-c7e9-58e0-bc1b-ff0c33fdba93", 00:17:19.307 "is_configured": true, 00:17:19.307 "data_offset": 2048, 00:17:19.307 "data_size": 63488 00:17:19.307 } 00:17:19.307 ] 00:17:19.307 }' 00:17:19.307 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 93000 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 93000 ']' 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 93000 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.307 16:33:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93000 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.307 killing process with pid 93000 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93000' 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 93000 00:17:19.307 Received shutdown signal, test time was about 60.000000 seconds 00:17:19.307 00:17:19.307 Latency(us) 00:17:19.307 [2024-12-06T16:33:01.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.307 [2024-12-06T16:33:01.146Z] =================================================================================================================== 00:17:19.307 [2024-12-06T16:33:01.146Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.307 [2024-12-06 16:33:01.115591] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:19.307 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 93000 00:17:19.307 [2024-12-06 16:33:01.115724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.307 [2024-12-06 16:33:01.115803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.307 [2024-12-06 16:33:01.115815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:17:19.567 [2024-12-06 16:33:01.157612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:19.567 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:19.567 00:17:19.567 real 0m21.574s 00:17:19.567 user 0m28.096s 
00:17:19.567 sys 0m2.751s 00:17:19.567 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.567 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.567 ************************************ 00:17:19.567 END TEST raid5f_rebuild_test_sb 00:17:19.567 ************************************ 00:17:19.827 16:33:01 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:19.827 16:33:01 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:19.827 16:33:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:19.827 16:33:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.827 16:33:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.827 ************************************ 00:17:19.827 START TEST raid5f_state_function_test 00:17:19.827 ************************************ 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93735 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:19.827 Process raid pid: 93735 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93735' 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93735 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 93735 ']' 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.827 16:33:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.827 [2024-12-06 16:33:01.524859] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:17:19.827 [2024-12-06 16:33:01.524984] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.095 [2024-12-06 16:33:01.699601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.096 [2024-12-06 16:33:01.725834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.096 [2024-12-06 16:33:01.768626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.096 [2024-12-06 16:33:01.768667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.665 [2024-12-06 16:33:02.371578] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:20.665 [2024-12-06 16:33:02.371639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:20.665 [2024-12-06 16:33:02.371657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.665 [2024-12-06 16:33:02.371668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.665 [2024-12-06 16:33:02.371674] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:20.665 [2024-12-06 16:33:02.371685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:20.665 [2024-12-06 16:33:02.371691] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:20.665 [2024-12-06 16:33:02.371699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.665 16:33:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.665 "name": "Existed_Raid", 00:17:20.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.665 "strip_size_kb": 64, 00:17:20.665 "state": "configuring", 00:17:20.665 "raid_level": "raid5f", 00:17:20.665 "superblock": false, 00:17:20.665 "num_base_bdevs": 4, 00:17:20.665 "num_base_bdevs_discovered": 0, 00:17:20.665 "num_base_bdevs_operational": 4, 00:17:20.665 "base_bdevs_list": [ 00:17:20.665 { 00:17:20.665 "name": "BaseBdev1", 00:17:20.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.665 "is_configured": false, 00:17:20.665 "data_offset": 0, 00:17:20.665 "data_size": 0 00:17:20.665 }, 00:17:20.665 { 00:17:20.665 "name": "BaseBdev2", 00:17:20.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.665 "is_configured": false, 00:17:20.665 "data_offset": 0, 00:17:20.665 "data_size": 0 00:17:20.665 }, 00:17:20.665 { 00:17:20.665 "name": "BaseBdev3", 00:17:20.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.665 "is_configured": false, 00:17:20.665 "data_offset": 0, 00:17:20.665 "data_size": 0 00:17:20.665 }, 00:17:20.665 { 00:17:20.665 "name": "BaseBdev4", 00:17:20.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.665 "is_configured": false, 00:17:20.665 "data_offset": 0, 00:17:20.665 "data_size": 0 00:17:20.665 } 00:17:20.665 ] 00:17:20.665 }' 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.665 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.235 16:33:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:21.235 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.235 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.235 [2024-12-06 16:33:02.794793] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:21.235 [2024-12-06 16:33:02.794843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:17:21.235 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.236 [2024-12-06 16:33:02.806751] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:21.236 [2024-12-06 16:33:02.806796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:21.236 [2024-12-06 16:33:02.806804] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:21.236 [2024-12-06 16:33:02.806813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:21.236 [2024-12-06 16:33:02.806819] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:21.236 [2024-12-06 16:33:02.806828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:21.236 [2024-12-06 16:33:02.806833] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:21.236 [2024-12-06 16:33:02.806842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.236 [2024-12-06 16:33:02.829305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.236 BaseBdev1 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.236 
16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.236 [ 00:17:21.236 { 00:17:21.236 "name": "BaseBdev1", 00:17:21.236 "aliases": [ 00:17:21.236 "7e66970a-c32a-43ed-8c2a-e66922a95edc" 00:17:21.236 ], 00:17:21.236 "product_name": "Malloc disk", 00:17:21.236 "block_size": 512, 00:17:21.236 "num_blocks": 65536, 00:17:21.236 "uuid": "7e66970a-c32a-43ed-8c2a-e66922a95edc", 00:17:21.236 "assigned_rate_limits": { 00:17:21.236 "rw_ios_per_sec": 0, 00:17:21.236 "rw_mbytes_per_sec": 0, 00:17:21.236 "r_mbytes_per_sec": 0, 00:17:21.236 "w_mbytes_per_sec": 0 00:17:21.236 }, 00:17:21.236 "claimed": true, 00:17:21.236 "claim_type": "exclusive_write", 00:17:21.236 "zoned": false, 00:17:21.236 "supported_io_types": { 00:17:21.236 "read": true, 00:17:21.236 "write": true, 00:17:21.236 "unmap": true, 00:17:21.236 "flush": true, 00:17:21.236 "reset": true, 00:17:21.236 "nvme_admin": false, 00:17:21.236 "nvme_io": false, 00:17:21.236 "nvme_io_md": false, 00:17:21.236 "write_zeroes": true, 00:17:21.236 "zcopy": true, 00:17:21.236 "get_zone_info": false, 00:17:21.236 "zone_management": false, 00:17:21.236 "zone_append": false, 00:17:21.236 "compare": false, 00:17:21.236 "compare_and_write": false, 00:17:21.236 "abort": true, 00:17:21.236 "seek_hole": false, 00:17:21.236 "seek_data": false, 00:17:21.236 "copy": true, 00:17:21.236 "nvme_iov_md": false 00:17:21.236 }, 00:17:21.236 "memory_domains": [ 00:17:21.236 { 00:17:21.236 "dma_device_id": "system", 00:17:21.236 "dma_device_type": 1 00:17:21.236 }, 00:17:21.236 { 00:17:21.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.236 "dma_device_type": 2 00:17:21.236 } 00:17:21.236 ], 00:17:21.236 "driver_specific": {} 00:17:21.236 } 
00:17:21.236 ] 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.236 "name": "Existed_Raid", 00:17:21.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.236 "strip_size_kb": 64, 00:17:21.236 "state": "configuring", 00:17:21.236 "raid_level": "raid5f", 00:17:21.236 "superblock": false, 00:17:21.236 "num_base_bdevs": 4, 00:17:21.236 "num_base_bdevs_discovered": 1, 00:17:21.236 "num_base_bdevs_operational": 4, 00:17:21.236 "base_bdevs_list": [ 00:17:21.236 { 00:17:21.236 "name": "BaseBdev1", 00:17:21.236 "uuid": "7e66970a-c32a-43ed-8c2a-e66922a95edc", 00:17:21.236 "is_configured": true, 00:17:21.236 "data_offset": 0, 00:17:21.236 "data_size": 65536 00:17:21.236 }, 00:17:21.236 { 00:17:21.236 "name": "BaseBdev2", 00:17:21.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.236 "is_configured": false, 00:17:21.236 "data_offset": 0, 00:17:21.236 "data_size": 0 00:17:21.236 }, 00:17:21.236 { 00:17:21.236 "name": "BaseBdev3", 00:17:21.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.236 "is_configured": false, 00:17:21.236 "data_offset": 0, 00:17:21.236 "data_size": 0 00:17:21.236 }, 00:17:21.236 { 00:17:21.236 "name": "BaseBdev4", 00:17:21.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.236 "is_configured": false, 00:17:21.236 "data_offset": 0, 00:17:21.236 "data_size": 0 00:17:21.236 } 00:17:21.236 ] 00:17:21.236 }' 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.236 16:33:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.496 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.497 
[2024-12-06 16:33:03.308519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:21.497 [2024-12-06 16:33:03.308583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.497 [2024-12-06 16:33:03.320533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.497 [2024-12-06 16:33:03.322398] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:21.497 [2024-12-06 16:33:03.322438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:21.497 [2024-12-06 16:33:03.322447] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:21.497 [2024-12-06 16:33:03.322455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:21.497 [2024-12-06 16:33:03.322461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:21.497 [2024-12-06 16:33:03.322469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.497 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.756 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.756 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.756 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.756 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.756 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.756 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.756 "name": "Existed_Raid", 00:17:21.756 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:21.756 "strip_size_kb": 64, 00:17:21.756 "state": "configuring", 00:17:21.756 "raid_level": "raid5f", 00:17:21.756 "superblock": false, 00:17:21.756 "num_base_bdevs": 4, 00:17:21.756 "num_base_bdevs_discovered": 1, 00:17:21.756 "num_base_bdevs_operational": 4, 00:17:21.756 "base_bdevs_list": [ 00:17:21.756 { 00:17:21.756 "name": "BaseBdev1", 00:17:21.756 "uuid": "7e66970a-c32a-43ed-8c2a-e66922a95edc", 00:17:21.756 "is_configured": true, 00:17:21.756 "data_offset": 0, 00:17:21.756 "data_size": 65536 00:17:21.756 }, 00:17:21.756 { 00:17:21.756 "name": "BaseBdev2", 00:17:21.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.756 "is_configured": false, 00:17:21.756 "data_offset": 0, 00:17:21.756 "data_size": 0 00:17:21.756 }, 00:17:21.756 { 00:17:21.756 "name": "BaseBdev3", 00:17:21.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.756 "is_configured": false, 00:17:21.756 "data_offset": 0, 00:17:21.756 "data_size": 0 00:17:21.756 }, 00:17:21.756 { 00:17:21.756 "name": "BaseBdev4", 00:17:21.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.756 "is_configured": false, 00:17:21.756 "data_offset": 0, 00:17:21.756 "data_size": 0 00:17:21.756 } 00:17:21.756 ] 00:17:21.756 }' 00:17:21.756 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.756 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.016 [2024-12-06 16:33:03.782847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:22.016 BaseBdev2 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.016 [ 00:17:22.016 { 00:17:22.016 "name": "BaseBdev2", 00:17:22.016 "aliases": [ 00:17:22.016 "7da045c2-4a57-4a14-a08c-803701fa1ec4" 00:17:22.016 ], 00:17:22.016 "product_name": "Malloc disk", 00:17:22.016 "block_size": 512, 00:17:22.016 "num_blocks": 65536, 00:17:22.016 "uuid": "7da045c2-4a57-4a14-a08c-803701fa1ec4", 00:17:22.016 "assigned_rate_limits": { 00:17:22.016 "rw_ios_per_sec": 0, 00:17:22.016 "rw_mbytes_per_sec": 0, 00:17:22.016 
"r_mbytes_per_sec": 0, 00:17:22.016 "w_mbytes_per_sec": 0 00:17:22.016 }, 00:17:22.016 "claimed": true, 00:17:22.016 "claim_type": "exclusive_write", 00:17:22.016 "zoned": false, 00:17:22.016 "supported_io_types": { 00:17:22.016 "read": true, 00:17:22.016 "write": true, 00:17:22.016 "unmap": true, 00:17:22.016 "flush": true, 00:17:22.016 "reset": true, 00:17:22.016 "nvme_admin": false, 00:17:22.016 "nvme_io": false, 00:17:22.016 "nvme_io_md": false, 00:17:22.016 "write_zeroes": true, 00:17:22.016 "zcopy": true, 00:17:22.016 "get_zone_info": false, 00:17:22.016 "zone_management": false, 00:17:22.016 "zone_append": false, 00:17:22.016 "compare": false, 00:17:22.016 "compare_and_write": false, 00:17:22.016 "abort": true, 00:17:22.016 "seek_hole": false, 00:17:22.016 "seek_data": false, 00:17:22.016 "copy": true, 00:17:22.016 "nvme_iov_md": false 00:17:22.016 }, 00:17:22.016 "memory_domains": [ 00:17:22.016 { 00:17:22.016 "dma_device_id": "system", 00:17:22.016 "dma_device_type": 1 00:17:22.016 }, 00:17:22.016 { 00:17:22.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.016 "dma_device_type": 2 00:17:22.016 } 00:17:22.016 ], 00:17:22.016 "driver_specific": {} 00:17:22.016 } 00:17:22.016 ] 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.016 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.276 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.276 "name": "Existed_Raid", 00:17:22.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.276 "strip_size_kb": 64, 00:17:22.276 "state": "configuring", 00:17:22.276 "raid_level": "raid5f", 00:17:22.276 "superblock": false, 00:17:22.276 "num_base_bdevs": 4, 00:17:22.276 "num_base_bdevs_discovered": 2, 00:17:22.276 "num_base_bdevs_operational": 4, 00:17:22.276 "base_bdevs_list": [ 00:17:22.276 { 00:17:22.276 "name": "BaseBdev1", 00:17:22.276 "uuid": 
"7e66970a-c32a-43ed-8c2a-e66922a95edc", 00:17:22.276 "is_configured": true, 00:17:22.276 "data_offset": 0, 00:17:22.276 "data_size": 65536 00:17:22.276 }, 00:17:22.276 { 00:17:22.276 "name": "BaseBdev2", 00:17:22.276 "uuid": "7da045c2-4a57-4a14-a08c-803701fa1ec4", 00:17:22.276 "is_configured": true, 00:17:22.276 "data_offset": 0, 00:17:22.276 "data_size": 65536 00:17:22.276 }, 00:17:22.276 { 00:17:22.276 "name": "BaseBdev3", 00:17:22.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.276 "is_configured": false, 00:17:22.276 "data_offset": 0, 00:17:22.276 "data_size": 0 00:17:22.276 }, 00:17:22.276 { 00:17:22.276 "name": "BaseBdev4", 00:17:22.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.276 "is_configured": false, 00:17:22.276 "data_offset": 0, 00:17:22.276 "data_size": 0 00:17:22.276 } 00:17:22.276 ] 00:17:22.276 }' 00:17:22.276 16:33:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.276 16:33:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.551 [2024-12-06 16:33:04.247248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:22.551 BaseBdev3 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.551 [ 00:17:22.551 { 00:17:22.551 "name": "BaseBdev3", 00:17:22.551 "aliases": [ 00:17:22.551 "33b8a608-012d-4b4c-b147-603205f31167" 00:17:22.551 ], 00:17:22.551 "product_name": "Malloc disk", 00:17:22.551 "block_size": 512, 00:17:22.551 "num_blocks": 65536, 00:17:22.551 "uuid": "33b8a608-012d-4b4c-b147-603205f31167", 00:17:22.551 "assigned_rate_limits": { 00:17:22.551 "rw_ios_per_sec": 0, 00:17:22.551 "rw_mbytes_per_sec": 0, 00:17:22.551 "r_mbytes_per_sec": 0, 00:17:22.551 "w_mbytes_per_sec": 0 00:17:22.551 }, 00:17:22.551 "claimed": true, 00:17:22.551 "claim_type": "exclusive_write", 00:17:22.551 "zoned": false, 00:17:22.551 "supported_io_types": { 00:17:22.551 "read": true, 00:17:22.551 "write": true, 00:17:22.551 "unmap": true, 00:17:22.551 "flush": true, 00:17:22.551 "reset": true, 00:17:22.551 "nvme_admin": false, 
00:17:22.551 "nvme_io": false, 00:17:22.551 "nvme_io_md": false, 00:17:22.551 "write_zeroes": true, 00:17:22.551 "zcopy": true, 00:17:22.551 "get_zone_info": false, 00:17:22.551 "zone_management": false, 00:17:22.551 "zone_append": false, 00:17:22.551 "compare": false, 00:17:22.551 "compare_and_write": false, 00:17:22.551 "abort": true, 00:17:22.551 "seek_hole": false, 00:17:22.551 "seek_data": false, 00:17:22.551 "copy": true, 00:17:22.551 "nvme_iov_md": false 00:17:22.551 }, 00:17:22.551 "memory_domains": [ 00:17:22.551 { 00:17:22.551 "dma_device_id": "system", 00:17:22.551 "dma_device_type": 1 00:17:22.551 }, 00:17:22.551 { 00:17:22.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.551 "dma_device_type": 2 00:17:22.551 } 00:17:22.551 ], 00:17:22.551 "driver_specific": {} 00:17:22.551 } 00:17:22.551 ] 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.551 "name": "Existed_Raid", 00:17:22.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.551 "strip_size_kb": 64, 00:17:22.551 "state": "configuring", 00:17:22.551 "raid_level": "raid5f", 00:17:22.551 "superblock": false, 00:17:22.551 "num_base_bdevs": 4, 00:17:22.551 "num_base_bdevs_discovered": 3, 00:17:22.551 "num_base_bdevs_operational": 4, 00:17:22.551 "base_bdevs_list": [ 00:17:22.551 { 00:17:22.551 "name": "BaseBdev1", 00:17:22.551 "uuid": "7e66970a-c32a-43ed-8c2a-e66922a95edc", 00:17:22.551 "is_configured": true, 00:17:22.551 "data_offset": 0, 00:17:22.551 "data_size": 65536 00:17:22.551 }, 00:17:22.551 { 00:17:22.551 "name": "BaseBdev2", 00:17:22.551 "uuid": "7da045c2-4a57-4a14-a08c-803701fa1ec4", 00:17:22.551 "is_configured": true, 00:17:22.551 "data_offset": 0, 00:17:22.551 "data_size": 65536 00:17:22.551 }, 00:17:22.551 { 
00:17:22.551 "name": "BaseBdev3", 00:17:22.551 "uuid": "33b8a608-012d-4b4c-b147-603205f31167", 00:17:22.551 "is_configured": true, 00:17:22.551 "data_offset": 0, 00:17:22.551 "data_size": 65536 00:17:22.551 }, 00:17:22.551 { 00:17:22.551 "name": "BaseBdev4", 00:17:22.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.551 "is_configured": false, 00:17:22.551 "data_offset": 0, 00:17:22.551 "data_size": 0 00:17:22.551 } 00:17:22.551 ] 00:17:22.551 }' 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.551 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.118 [2024-12-06 16:33:04.773588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:23.118 [2024-12-06 16:33:04.773654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:17:23.118 [2024-12-06 16:33:04.773671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:23.118 [2024-12-06 16:33:04.773985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:23.118 [2024-12-06 16:33:04.774547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:17:23.118 [2024-12-06 16:33:04.774569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:17:23.118 [2024-12-06 16:33:04.774761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.118 BaseBdev4 00:17:23.118 16:33:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.118 [ 00:17:23.118 { 00:17:23.118 "name": "BaseBdev4", 00:17:23.118 "aliases": [ 00:17:23.118 "7d6808b5-113b-41c4-b987-bb8ba0de517a" 00:17:23.118 ], 00:17:23.118 "product_name": "Malloc disk", 00:17:23.118 "block_size": 512, 00:17:23.118 "num_blocks": 65536, 00:17:23.118 "uuid": "7d6808b5-113b-41c4-b987-bb8ba0de517a", 00:17:23.118 "assigned_rate_limits": { 00:17:23.118 "rw_ios_per_sec": 0, 00:17:23.118 
"rw_mbytes_per_sec": 0, 00:17:23.118 "r_mbytes_per_sec": 0, 00:17:23.118 "w_mbytes_per_sec": 0 00:17:23.118 }, 00:17:23.118 "claimed": true, 00:17:23.118 "claim_type": "exclusive_write", 00:17:23.118 "zoned": false, 00:17:23.118 "supported_io_types": { 00:17:23.118 "read": true, 00:17:23.118 "write": true, 00:17:23.118 "unmap": true, 00:17:23.118 "flush": true, 00:17:23.118 "reset": true, 00:17:23.118 "nvme_admin": false, 00:17:23.118 "nvme_io": false, 00:17:23.118 "nvme_io_md": false, 00:17:23.118 "write_zeroes": true, 00:17:23.118 "zcopy": true, 00:17:23.118 "get_zone_info": false, 00:17:23.118 "zone_management": false, 00:17:23.118 "zone_append": false, 00:17:23.118 "compare": false, 00:17:23.118 "compare_and_write": false, 00:17:23.118 "abort": true, 00:17:23.118 "seek_hole": false, 00:17:23.118 "seek_data": false, 00:17:23.118 "copy": true, 00:17:23.118 "nvme_iov_md": false 00:17:23.118 }, 00:17:23.118 "memory_domains": [ 00:17:23.118 { 00:17:23.118 "dma_device_id": "system", 00:17:23.118 "dma_device_type": 1 00:17:23.118 }, 00:17:23.118 { 00:17:23.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.118 "dma_device_type": 2 00:17:23.118 } 00:17:23.118 ], 00:17:23.118 "driver_specific": {} 00:17:23.118 } 00:17:23.118 ] 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.118 16:33:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.118 "name": "Existed_Raid", 00:17:23.118 "uuid": "c0ea2970-48b1-4e52-b616-eb63bbd12194", 00:17:23.118 "strip_size_kb": 64, 00:17:23.118 "state": "online", 00:17:23.118 "raid_level": "raid5f", 00:17:23.118 "superblock": false, 00:17:23.118 "num_base_bdevs": 4, 00:17:23.118 "num_base_bdevs_discovered": 4, 00:17:23.118 "num_base_bdevs_operational": 4, 00:17:23.118 "base_bdevs_list": [ 00:17:23.118 { 00:17:23.118 "name": 
"BaseBdev1", 00:17:23.118 "uuid": "7e66970a-c32a-43ed-8c2a-e66922a95edc", 00:17:23.118 "is_configured": true, 00:17:23.118 "data_offset": 0, 00:17:23.118 "data_size": 65536 00:17:23.118 }, 00:17:23.118 { 00:17:23.118 "name": "BaseBdev2", 00:17:23.118 "uuid": "7da045c2-4a57-4a14-a08c-803701fa1ec4", 00:17:23.118 "is_configured": true, 00:17:23.118 "data_offset": 0, 00:17:23.118 "data_size": 65536 00:17:23.118 }, 00:17:23.118 { 00:17:23.118 "name": "BaseBdev3", 00:17:23.118 "uuid": "33b8a608-012d-4b4c-b147-603205f31167", 00:17:23.118 "is_configured": true, 00:17:23.118 "data_offset": 0, 00:17:23.118 "data_size": 65536 00:17:23.118 }, 00:17:23.118 { 00:17:23.118 "name": "BaseBdev4", 00:17:23.118 "uuid": "7d6808b5-113b-41c4-b987-bb8ba0de517a", 00:17:23.118 "is_configured": true, 00:17:23.118 "data_offset": 0, 00:17:23.118 "data_size": 65536 00:17:23.118 } 00:17:23.118 ] 00:17:23.118 }' 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.118 16:33:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:23.687 [2024-12-06 16:33:05.289057] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:23.687 "name": "Existed_Raid", 00:17:23.687 "aliases": [ 00:17:23.687 "c0ea2970-48b1-4e52-b616-eb63bbd12194" 00:17:23.687 ], 00:17:23.687 "product_name": "Raid Volume", 00:17:23.687 "block_size": 512, 00:17:23.687 "num_blocks": 196608, 00:17:23.687 "uuid": "c0ea2970-48b1-4e52-b616-eb63bbd12194", 00:17:23.687 "assigned_rate_limits": { 00:17:23.687 "rw_ios_per_sec": 0, 00:17:23.687 "rw_mbytes_per_sec": 0, 00:17:23.687 "r_mbytes_per_sec": 0, 00:17:23.687 "w_mbytes_per_sec": 0 00:17:23.687 }, 00:17:23.687 "claimed": false, 00:17:23.687 "zoned": false, 00:17:23.687 "supported_io_types": { 00:17:23.687 "read": true, 00:17:23.687 "write": true, 00:17:23.687 "unmap": false, 00:17:23.687 "flush": false, 00:17:23.687 "reset": true, 00:17:23.687 "nvme_admin": false, 00:17:23.687 "nvme_io": false, 00:17:23.687 "nvme_io_md": false, 00:17:23.687 "write_zeroes": true, 00:17:23.687 "zcopy": false, 00:17:23.687 "get_zone_info": false, 00:17:23.687 "zone_management": false, 00:17:23.687 "zone_append": false, 00:17:23.687 "compare": false, 00:17:23.687 "compare_and_write": false, 00:17:23.687 "abort": false, 00:17:23.687 "seek_hole": false, 00:17:23.687 "seek_data": false, 00:17:23.687 "copy": false, 00:17:23.687 "nvme_iov_md": false 00:17:23.687 }, 00:17:23.687 "driver_specific": { 00:17:23.687 "raid": { 00:17:23.687 "uuid": "c0ea2970-48b1-4e52-b616-eb63bbd12194", 00:17:23.687 "strip_size_kb": 64, 
00:17:23.687 "state": "online", 00:17:23.687 "raid_level": "raid5f", 00:17:23.687 "superblock": false, 00:17:23.687 "num_base_bdevs": 4, 00:17:23.687 "num_base_bdevs_discovered": 4, 00:17:23.687 "num_base_bdevs_operational": 4, 00:17:23.687 "base_bdevs_list": [ 00:17:23.687 { 00:17:23.687 "name": "BaseBdev1", 00:17:23.687 "uuid": "7e66970a-c32a-43ed-8c2a-e66922a95edc", 00:17:23.687 "is_configured": true, 00:17:23.687 "data_offset": 0, 00:17:23.687 "data_size": 65536 00:17:23.687 }, 00:17:23.687 { 00:17:23.687 "name": "BaseBdev2", 00:17:23.687 "uuid": "7da045c2-4a57-4a14-a08c-803701fa1ec4", 00:17:23.687 "is_configured": true, 00:17:23.687 "data_offset": 0, 00:17:23.687 "data_size": 65536 00:17:23.687 }, 00:17:23.687 { 00:17:23.687 "name": "BaseBdev3", 00:17:23.687 "uuid": "33b8a608-012d-4b4c-b147-603205f31167", 00:17:23.687 "is_configured": true, 00:17:23.687 "data_offset": 0, 00:17:23.687 "data_size": 65536 00:17:23.687 }, 00:17:23.687 { 00:17:23.687 "name": "BaseBdev4", 00:17:23.687 "uuid": "7d6808b5-113b-41c4-b987-bb8ba0de517a", 00:17:23.687 "is_configured": true, 00:17:23.687 "data_offset": 0, 00:17:23.687 "data_size": 65536 00:17:23.687 } 00:17:23.687 ] 00:17:23.687 } 00:17:23.687 } 00:17:23.687 }' 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:23.687 BaseBdev2 00:17:23.687 BaseBdev3 00:17:23.687 BaseBdev4' 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:23.687 16:33:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.687 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.947 [2024-12-06 16:33:05.648352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.947 16:33:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.947 "name": "Existed_Raid", 00:17:23.947 "uuid": "c0ea2970-48b1-4e52-b616-eb63bbd12194", 00:17:23.947 "strip_size_kb": 64, 00:17:23.947 "state": "online", 00:17:23.947 "raid_level": "raid5f", 00:17:23.947 "superblock": false, 00:17:23.947 "num_base_bdevs": 4, 00:17:23.947 "num_base_bdevs_discovered": 3, 00:17:23.947 "num_base_bdevs_operational": 3, 00:17:23.947 "base_bdevs_list": [ 00:17:23.947 { 00:17:23.947 "name": null, 00:17:23.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.947 "is_configured": false, 00:17:23.947 "data_offset": 0, 00:17:23.947 "data_size": 65536 00:17:23.947 }, 00:17:23.947 { 00:17:23.947 "name": "BaseBdev2", 00:17:23.947 "uuid": "7da045c2-4a57-4a14-a08c-803701fa1ec4", 00:17:23.947 "is_configured": true, 00:17:23.947 "data_offset": 0, 00:17:23.947 "data_size": 65536 00:17:23.947 }, 00:17:23.947 { 00:17:23.947 "name": "BaseBdev3", 00:17:23.947 "uuid": "33b8a608-012d-4b4c-b147-603205f31167", 00:17:23.947 "is_configured": true, 00:17:23.947 "data_offset": 0, 00:17:23.947 "data_size": 65536 00:17:23.947 }, 00:17:23.947 { 00:17:23.947 "name": "BaseBdev4", 00:17:23.947 "uuid": "7d6808b5-113b-41c4-b987-bb8ba0de517a", 00:17:23.947 "is_configured": true, 00:17:23.947 "data_offset": 0, 00:17:23.947 "data_size": 65536 00:17:23.947 } 00:17:23.947 ] 00:17:23.947 }' 00:17:23.947 
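The trace above shows `verify_raid_bdev_state` fetching the raid bdev via `rpc_cmd bdev_raid_get_bdevs all` and filtering the result by name. The pattern can be sketched as below; the `rpc_cmd_stub` function and the abridged JSON are stand-ins for this sketch (mirroring the `Existed_Raid` record captured in the trace), not real SPDK helpers, and `python3` replaces the test's `jq` so the sketch has no extra dependency.

```shell
#!/usr/bin/env bash
# Stub standing in for "rpc_cmd bdev_raid_get_bdevs all"; the JSON is an
# abridged copy of the Existed_Raid record from the trace above.
rpc_cmd_stub() {
  cat <<'EOF'
[{"name": "Existed_Raid", "state": "online",
  "num_base_bdevs": 4, "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3}]
EOF
}

# Select the record by name and pull out the fields the test compares.
read -r state discovered operational < <(rpc_cmd_stub | python3 -c '
import json, sys
r = next(b for b in json.load(sys.stdin) if b["name"] == "Existed_Raid")
print(r["state"], r["num_base_bdevs_discovered"], r["num_base_bdevs_operational"])
')

[[ $state == online ]] && (( discovered == operational )) \
  && echo "Existed_Raid verified: $state, $discovered/$operational base bdevs"
```

In the real test the expected state, strip size, and operational count are passed as positional arguments and each field is compared separately.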
16:33:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.947 16:33:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.516 [2024-12-06 16:33:06.186995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:24.516 [2024-12-06 16:33:06.187183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.516 [2024-12-06 16:33:06.198400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.516 [2024-12-06 16:33:06.254398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.516 [2024-12-06 16:33:06.325811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:24.516 [2024-12-06 16:33:06.325959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:24.516 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:24.517 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.517 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.517 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.517 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
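The loop traced above deletes base bdevs one at a time (`bdev_malloc_delete BaseBdev2` … `BaseBdev4`) and watches the raid bdev: with BaseBdev1 already removed, raid5f has exhausted its single-device redundancy, so the next removal drives the state from online to offline, exactly as the `raid_bdev_deconfigure` debug line reports. A minimal simulation of that behavior (the `stub_delete_base_bdev` helper and threshold logic are invented for this sketch, not SPDK code):

```shell
#!/usr/bin/env bash
# Simulate the removal loop from the trace: 4-device raid5f, BaseBdev1
# already gone, so 3 base bdevs remain and one more loss takes it offline.
num_bdevs=3
raid_state=online

stub_delete_base_bdev() {   # stands in for "rpc_cmd bdev_malloc_delete $1"
  (( num_bdevs-- ))
  # raid5f with 4 base bdevs needs at least 3 operational to stay online.
  (( num_bdevs < 3 )) && raid_state=offline
}

for name in BaseBdev2 BaseBdev3 BaseBdev4; do
  stub_delete_base_bdev "$name"
  echo "removed $name -> raid state: $raid_state"
done
```

After the first removal in the loop the state flips to offline and stays there, matching the online-to-offline transition logged for BaseBdev2.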
00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.778 BaseBdev2 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.778 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.779 [ 00:17:24.779 { 00:17:24.779 "name": "BaseBdev2", 00:17:24.779 "aliases": [ 00:17:24.779 "0e5d2567-819f-49e7-bc5b-66a4f53c3e28" 00:17:24.779 ], 00:17:24.779 "product_name": "Malloc disk", 00:17:24.779 "block_size": 512, 00:17:24.779 "num_blocks": 65536, 00:17:24.779 "uuid": "0e5d2567-819f-49e7-bc5b-66a4f53c3e28", 00:17:24.779 "assigned_rate_limits": { 00:17:24.779 "rw_ios_per_sec": 0, 00:17:24.779 "rw_mbytes_per_sec": 0, 00:17:24.779 "r_mbytes_per_sec": 0, 00:17:24.779 "w_mbytes_per_sec": 0 00:17:24.779 }, 00:17:24.779 "claimed": false, 00:17:24.779 "zoned": false, 00:17:24.779 "supported_io_types": { 00:17:24.779 "read": true, 00:17:24.779 "write": true, 00:17:24.779 "unmap": true, 00:17:24.779 "flush": true, 00:17:24.779 "reset": true, 00:17:24.779 "nvme_admin": false, 00:17:24.779 "nvme_io": false, 00:17:24.779 "nvme_io_md": false, 00:17:24.779 "write_zeroes": true, 00:17:24.779 "zcopy": true, 00:17:24.779 "get_zone_info": false, 00:17:24.779 "zone_management": false, 00:17:24.779 "zone_append": false, 00:17:24.779 "compare": false, 00:17:24.779 "compare_and_write": false, 00:17:24.779 "abort": true, 00:17:24.779 "seek_hole": false, 00:17:24.779 "seek_data": false, 00:17:24.779 "copy": true, 00:17:24.779 "nvme_iov_md": false 00:17:24.779 }, 00:17:24.779 "memory_domains": [ 00:17:24.779 { 00:17:24.779 "dma_device_id": "system", 00:17:24.779 
"dma_device_type": 1 00:17:24.779 }, 00:17:24.779 { 00:17:24.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.779 "dma_device_type": 2 00:17:24.779 } 00:17:24.779 ], 00:17:24.779 "driver_specific": {} 00:17:24.779 } 00:17:24.779 ] 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.779 BaseBdev3 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:24.779 16:33:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.779 [ 00:17:24.779 { 00:17:24.779 "name": "BaseBdev3", 00:17:24.779 "aliases": [ 00:17:24.779 "ad9d6589-4532-46c3-89dd-3c9554fcf3f2" 00:17:24.779 ], 00:17:24.779 "product_name": "Malloc disk", 00:17:24.779 "block_size": 512, 00:17:24.779 "num_blocks": 65536, 00:17:24.779 "uuid": "ad9d6589-4532-46c3-89dd-3c9554fcf3f2", 00:17:24.779 "assigned_rate_limits": { 00:17:24.779 "rw_ios_per_sec": 0, 00:17:24.779 "rw_mbytes_per_sec": 0, 00:17:24.779 "r_mbytes_per_sec": 0, 00:17:24.779 "w_mbytes_per_sec": 0 00:17:24.779 }, 00:17:24.779 "claimed": false, 00:17:24.779 "zoned": false, 00:17:24.779 "supported_io_types": { 00:17:24.779 "read": true, 00:17:24.779 "write": true, 00:17:24.779 "unmap": true, 00:17:24.779 "flush": true, 00:17:24.779 "reset": true, 00:17:24.779 "nvme_admin": false, 00:17:24.779 "nvme_io": false, 00:17:24.779 "nvme_io_md": false, 00:17:24.779 "write_zeroes": true, 00:17:24.779 "zcopy": true, 00:17:24.779 "get_zone_info": false, 00:17:24.779 "zone_management": false, 00:17:24.779 "zone_append": false, 00:17:24.779 "compare": false, 00:17:24.779 "compare_and_write": false, 00:17:24.779 "abort": true, 00:17:24.779 "seek_hole": false, 00:17:24.779 "seek_data": false, 00:17:24.779 "copy": true, 00:17:24.779 "nvme_iov_md": false 00:17:24.779 }, 00:17:24.779 "memory_domains": [ 00:17:24.779 { 00:17:24.779 
"dma_device_id": "system", 00:17:24.779 "dma_device_type": 1 00:17:24.779 }, 00:17:24.779 { 00:17:24.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.779 "dma_device_type": 2 00:17:24.779 } 00:17:24.779 ], 00:17:24.779 "driver_specific": {} 00:17:24.779 } 00:17:24.779 ] 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.779 BaseBdev4 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.779 [ 00:17:24.779 { 00:17:24.779 "name": "BaseBdev4", 00:17:24.779 "aliases": [ 00:17:24.779 "5fa13c69-d227-4088-8d95-b1a57cb0d3b2" 00:17:24.779 ], 00:17:24.779 "product_name": "Malloc disk", 00:17:24.779 "block_size": 512, 00:17:24.779 "num_blocks": 65536, 00:17:24.779 "uuid": "5fa13c69-d227-4088-8d95-b1a57cb0d3b2", 00:17:24.779 "assigned_rate_limits": { 00:17:24.779 "rw_ios_per_sec": 0, 00:17:24.779 "rw_mbytes_per_sec": 0, 00:17:24.779 "r_mbytes_per_sec": 0, 00:17:24.779 "w_mbytes_per_sec": 0 00:17:24.779 }, 00:17:24.779 "claimed": false, 00:17:24.779 "zoned": false, 00:17:24.779 "supported_io_types": { 00:17:24.779 "read": true, 00:17:24.779 "write": true, 00:17:24.779 "unmap": true, 00:17:24.779 "flush": true, 00:17:24.779 "reset": true, 00:17:24.779 "nvme_admin": false, 00:17:24.779 "nvme_io": false, 00:17:24.779 "nvme_io_md": false, 00:17:24.779 "write_zeroes": true, 00:17:24.779 "zcopy": true, 00:17:24.779 "get_zone_info": false, 00:17:24.779 "zone_management": false, 00:17:24.779 "zone_append": false, 00:17:24.779 "compare": false, 00:17:24.779 "compare_and_write": false, 00:17:24.779 "abort": true, 00:17:24.779 "seek_hole": false, 00:17:24.779 "seek_data": false, 00:17:24.779 "copy": true, 00:17:24.779 "nvme_iov_md": false 00:17:24.779 }, 00:17:24.779 "memory_domains": [ 
00:17:24.779 { 00:17:24.779 "dma_device_id": "system", 00:17:24.779 "dma_device_type": 1 00:17:24.779 }, 00:17:24.779 { 00:17:24.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.779 "dma_device_type": 2 00:17:24.779 } 00:17:24.779 ], 00:17:24.779 "driver_specific": {} 00:17:24.779 } 00:17:24.779 ] 00:17:24.779 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.780 [2024-12-06 16:33:06.556189] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.780 [2024-12-06 16:33:06.556296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.780 [2024-12-06 16:33:06.556347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.780 [2024-12-06 16:33:06.558306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:24.780 [2024-12-06 16:33:06.558413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.780 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.040 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.040 "name": "Existed_Raid", 00:17:25.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.040 "strip_size_kb": 64, 00:17:25.040 "state": "configuring", 00:17:25.040 "raid_level": "raid5f", 00:17:25.040 
"superblock": false, 00:17:25.040 "num_base_bdevs": 4, 00:17:25.040 "num_base_bdevs_discovered": 3, 00:17:25.040 "num_base_bdevs_operational": 4, 00:17:25.040 "base_bdevs_list": [ 00:17:25.040 { 00:17:25.040 "name": "BaseBdev1", 00:17:25.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.040 "is_configured": false, 00:17:25.040 "data_offset": 0, 00:17:25.040 "data_size": 0 00:17:25.040 }, 00:17:25.040 { 00:17:25.040 "name": "BaseBdev2", 00:17:25.040 "uuid": "0e5d2567-819f-49e7-bc5b-66a4f53c3e28", 00:17:25.040 "is_configured": true, 00:17:25.040 "data_offset": 0, 00:17:25.040 "data_size": 65536 00:17:25.040 }, 00:17:25.040 { 00:17:25.040 "name": "BaseBdev3", 00:17:25.040 "uuid": "ad9d6589-4532-46c3-89dd-3c9554fcf3f2", 00:17:25.040 "is_configured": true, 00:17:25.040 "data_offset": 0, 00:17:25.040 "data_size": 65536 00:17:25.040 }, 00:17:25.040 { 00:17:25.040 "name": "BaseBdev4", 00:17:25.040 "uuid": "5fa13c69-d227-4088-8d95-b1a57cb0d3b2", 00:17:25.040 "is_configured": true, 00:17:25.040 "data_offset": 0, 00:17:25.040 "data_size": 65536 00:17:25.040 } 00:17:25.040 ] 00:17:25.040 }' 00:17:25.040 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.040 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.299 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:25.299 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.299 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.299 [2024-12-06 16:33:06.995525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:25.299 16:33:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.299 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:17:25.299 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.299 16:33:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.299 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.299 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.299 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.299 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.300 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.300 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.300 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.300 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.300 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.300 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.300 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.300 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.300 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.300 "name": "Existed_Raid", 00:17:25.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.300 "strip_size_kb": 64, 00:17:25.300 "state": "configuring", 00:17:25.300 "raid_level": "raid5f", 00:17:25.300 "superblock": false, 
00:17:25.300 "num_base_bdevs": 4, 00:17:25.300 "num_base_bdevs_discovered": 2, 00:17:25.300 "num_base_bdevs_operational": 4, 00:17:25.300 "base_bdevs_list": [ 00:17:25.300 { 00:17:25.300 "name": "BaseBdev1", 00:17:25.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.300 "is_configured": false, 00:17:25.300 "data_offset": 0, 00:17:25.300 "data_size": 0 00:17:25.300 }, 00:17:25.300 { 00:17:25.300 "name": null, 00:17:25.300 "uuid": "0e5d2567-819f-49e7-bc5b-66a4f53c3e28", 00:17:25.300 "is_configured": false, 00:17:25.300 "data_offset": 0, 00:17:25.300 "data_size": 65536 00:17:25.300 }, 00:17:25.300 { 00:17:25.300 "name": "BaseBdev3", 00:17:25.300 "uuid": "ad9d6589-4532-46c3-89dd-3c9554fcf3f2", 00:17:25.300 "is_configured": true, 00:17:25.300 "data_offset": 0, 00:17:25.300 "data_size": 65536 00:17:25.300 }, 00:17:25.300 { 00:17:25.300 "name": "BaseBdev4", 00:17:25.300 "uuid": "5fa13c69-d227-4088-8d95-b1a57cb0d3b2", 00:17:25.300 "is_configured": true, 00:17:25.300 "data_offset": 0, 00:17:25.300 "data_size": 65536 00:17:25.300 } 00:17:25.300 ] 00:17:25.300 }' 00:17:25.300 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.300 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:25.869 
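The `waitforbdev` calls traced here (local `bdev_timeout` defaulting to 2000, then `bdev_wait_for_examine` and `bdev_get_bdevs -b NAME -t 2000`) follow a generic poll-until-ready-or-timeout pattern. A self-contained sketch of that pattern is below; `stub_bdev_exists` is invented for illustration (in the real test the existence check is the `rpc_cmd bdev_get_bdevs` call itself).

```shell
#!/usr/bin/env bash
# Poll-with-timeout pattern behind waitforbdev. The stub pretends the bdev
# appears on the third poll; real code would query the RPC instead.
attempts=0
stub_bdev_exists() {
  (( ++attempts >= 3 ))
}

waitforbdev() {
  local name=$1 timeout_ms=${2:-2000}   # default mirrors the trace's 2000 ms
  local deadline=$(( SECONDS + timeout_ms / 1000 ))
  until stub_bdev_exists "$name"; do
    if (( SECONDS >= deadline )); then
      echo "timed out waiting for $name"
      return 1
    fi
    sleep 0.1
  done
  echo "$name is ready"
}

waitforbdev BaseBdev1
```

The coarse `$SECONDS` granularity is fine for a sketch; the real helper bounds the wait on the RPC side via `-t` rather than in shell.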
16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.869 [2024-12-06 16:33:07.525707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.869 BaseBdev1 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:25.869 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.869 
16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.869 [ 00:17:25.869 { 00:17:25.869 "name": "BaseBdev1", 00:17:25.869 "aliases": [ 00:17:25.869 "b540d7c9-17e0-491f-b754-922deacef796" 00:17:25.869 ], 00:17:25.869 "product_name": "Malloc disk", 00:17:25.869 "block_size": 512, 00:17:25.869 "num_blocks": 65536, 00:17:25.869 "uuid": "b540d7c9-17e0-491f-b754-922deacef796", 00:17:25.869 "assigned_rate_limits": { 00:17:25.869 "rw_ios_per_sec": 0, 00:17:25.869 "rw_mbytes_per_sec": 0, 00:17:25.869 "r_mbytes_per_sec": 0, 00:17:25.869 "w_mbytes_per_sec": 0 00:17:25.869 }, 00:17:25.869 "claimed": true, 00:17:25.869 "claim_type": "exclusive_write", 00:17:25.869 "zoned": false, 00:17:25.869 "supported_io_types": { 00:17:25.869 "read": true, 00:17:25.869 "write": true, 00:17:25.869 "unmap": true, 00:17:25.869 "flush": true, 00:17:25.869 "reset": true, 00:17:25.869 "nvme_admin": false, 00:17:25.869 "nvme_io": false, 00:17:25.869 "nvme_io_md": false, 00:17:25.869 "write_zeroes": true, 00:17:25.869 "zcopy": true, 00:17:25.869 "get_zone_info": false, 00:17:25.869 "zone_management": false, 00:17:25.869 "zone_append": false, 00:17:25.869 "compare": false, 00:17:25.870 "compare_and_write": false, 00:17:25.870 "abort": true, 00:17:25.870 "seek_hole": false, 00:17:25.870 "seek_data": false, 00:17:25.870 "copy": true, 00:17:25.870 "nvme_iov_md": false 00:17:25.870 }, 00:17:25.870 "memory_domains": [ 00:17:25.870 { 00:17:25.870 "dma_device_id": "system", 00:17:25.870 "dma_device_type": 1 00:17:25.870 }, 00:17:25.870 { 00:17:25.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.870 "dma_device_type": 2 00:17:25.870 } 00:17:25.870 ], 00:17:25.870 "driver_specific": {} 00:17:25.870 } 00:17:25.870 ] 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:25.870 16:33:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.870 "name": "Existed_Raid", 00:17:25.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.870 "strip_size_kb": 64, 00:17:25.870 "state": 
"configuring", 00:17:25.870 "raid_level": "raid5f", 00:17:25.870 "superblock": false, 00:17:25.870 "num_base_bdevs": 4, 00:17:25.870 "num_base_bdevs_discovered": 3, 00:17:25.870 "num_base_bdevs_operational": 4, 00:17:25.870 "base_bdevs_list": [ 00:17:25.870 { 00:17:25.870 "name": "BaseBdev1", 00:17:25.870 "uuid": "b540d7c9-17e0-491f-b754-922deacef796", 00:17:25.870 "is_configured": true, 00:17:25.870 "data_offset": 0, 00:17:25.870 "data_size": 65536 00:17:25.870 }, 00:17:25.870 { 00:17:25.870 "name": null, 00:17:25.870 "uuid": "0e5d2567-819f-49e7-bc5b-66a4f53c3e28", 00:17:25.870 "is_configured": false, 00:17:25.870 "data_offset": 0, 00:17:25.870 "data_size": 65536 00:17:25.870 }, 00:17:25.870 { 00:17:25.870 "name": "BaseBdev3", 00:17:25.870 "uuid": "ad9d6589-4532-46c3-89dd-3c9554fcf3f2", 00:17:25.870 "is_configured": true, 00:17:25.870 "data_offset": 0, 00:17:25.870 "data_size": 65536 00:17:25.870 }, 00:17:25.870 { 00:17:25.870 "name": "BaseBdev4", 00:17:25.870 "uuid": "5fa13c69-d227-4088-8d95-b1a57cb0d3b2", 00:17:25.870 "is_configured": true, 00:17:25.870 "data_offset": 0, 00:17:25.870 "data_size": 65536 00:17:25.870 } 00:17:25.870 ] 00:17:25.870 }' 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.870 16:33:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.438 16:33:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.438 16:33:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.438 [2024-12-06 16:33:08.056888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.438 16:33:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.438 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.438 "name": "Existed_Raid", 00:17:26.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.438 "strip_size_kb": 64, 00:17:26.438 "state": "configuring", 00:17:26.438 "raid_level": "raid5f", 00:17:26.438 "superblock": false, 00:17:26.438 "num_base_bdevs": 4, 00:17:26.438 "num_base_bdevs_discovered": 2, 00:17:26.439 "num_base_bdevs_operational": 4, 00:17:26.439 "base_bdevs_list": [ 00:17:26.439 { 00:17:26.439 "name": "BaseBdev1", 00:17:26.439 "uuid": "b540d7c9-17e0-491f-b754-922deacef796", 00:17:26.439 "is_configured": true, 00:17:26.439 "data_offset": 0, 00:17:26.439 "data_size": 65536 00:17:26.439 }, 00:17:26.439 { 00:17:26.439 "name": null, 00:17:26.439 "uuid": "0e5d2567-819f-49e7-bc5b-66a4f53c3e28", 00:17:26.439 "is_configured": false, 00:17:26.439 "data_offset": 0, 00:17:26.439 "data_size": 65536 00:17:26.439 }, 00:17:26.439 { 00:17:26.439 "name": null, 00:17:26.439 "uuid": "ad9d6589-4532-46c3-89dd-3c9554fcf3f2", 00:17:26.439 "is_configured": false, 00:17:26.439 "data_offset": 0, 00:17:26.439 "data_size": 65536 00:17:26.439 }, 00:17:26.439 { 00:17:26.439 "name": "BaseBdev4", 00:17:26.439 "uuid": "5fa13c69-d227-4088-8d95-b1a57cb0d3b2", 00:17:26.439 "is_configured": true, 00:17:26.439 "data_offset": 0, 00:17:26.439 "data_size": 65536 00:17:26.439 } 00:17:26.439 ] 00:17:26.439 }' 00:17:26.439 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.439 16:33:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.698 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.698 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:26.698 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.698 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.958 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.958 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:26.958 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:26.958 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.958 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.959 [2024-12-06 16:33:08.572079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.959 
16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.959 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.959 "name": "Existed_Raid", 00:17:26.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.959 "strip_size_kb": 64, 00:17:26.959 "state": "configuring", 00:17:26.959 "raid_level": "raid5f", 00:17:26.959 "superblock": false, 00:17:26.959 "num_base_bdevs": 4, 00:17:26.959 "num_base_bdevs_discovered": 3, 00:17:26.959 "num_base_bdevs_operational": 4, 00:17:26.959 "base_bdevs_list": [ 00:17:26.959 { 00:17:26.959 "name": "BaseBdev1", 00:17:26.959 "uuid": "b540d7c9-17e0-491f-b754-922deacef796", 00:17:26.959 "is_configured": true, 00:17:26.959 "data_offset": 0, 00:17:26.959 "data_size": 65536 00:17:26.959 }, 00:17:26.982 { 00:17:26.982 "name": null, 00:17:26.982 "uuid": "0e5d2567-819f-49e7-bc5b-66a4f53c3e28", 00:17:26.982 "is_configured": 
false, 00:17:26.982 "data_offset": 0, 00:17:26.982 "data_size": 65536 00:17:26.982 }, 00:17:26.982 { 00:17:26.982 "name": "BaseBdev3", 00:17:26.982 "uuid": "ad9d6589-4532-46c3-89dd-3c9554fcf3f2", 00:17:26.982 "is_configured": true, 00:17:26.982 "data_offset": 0, 00:17:26.982 "data_size": 65536 00:17:26.982 }, 00:17:26.982 { 00:17:26.982 "name": "BaseBdev4", 00:17:26.982 "uuid": "5fa13c69-d227-4088-8d95-b1a57cb0d3b2", 00:17:26.982 "is_configured": true, 00:17:26.982 "data_offset": 0, 00:17:26.982 "data_size": 65536 00:17:26.982 } 00:17:26.982 ] 00:17:26.982 }' 00:17:26.982 16:33:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.982 16:33:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.241 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.241 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:27.241 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.241 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.241 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.500 [2024-12-06 16:33:09.087189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.500 "name": "Existed_Raid", 00:17:27.500 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:27.500 "strip_size_kb": 64, 00:17:27.500 "state": "configuring", 00:17:27.500 "raid_level": "raid5f", 00:17:27.500 "superblock": false, 00:17:27.500 "num_base_bdevs": 4, 00:17:27.500 "num_base_bdevs_discovered": 2, 00:17:27.500 "num_base_bdevs_operational": 4, 00:17:27.500 "base_bdevs_list": [ 00:17:27.500 { 00:17:27.500 "name": null, 00:17:27.500 "uuid": "b540d7c9-17e0-491f-b754-922deacef796", 00:17:27.500 "is_configured": false, 00:17:27.500 "data_offset": 0, 00:17:27.500 "data_size": 65536 00:17:27.500 }, 00:17:27.500 { 00:17:27.500 "name": null, 00:17:27.500 "uuid": "0e5d2567-819f-49e7-bc5b-66a4f53c3e28", 00:17:27.500 "is_configured": false, 00:17:27.500 "data_offset": 0, 00:17:27.500 "data_size": 65536 00:17:27.500 }, 00:17:27.500 { 00:17:27.500 "name": "BaseBdev3", 00:17:27.500 "uuid": "ad9d6589-4532-46c3-89dd-3c9554fcf3f2", 00:17:27.500 "is_configured": true, 00:17:27.500 "data_offset": 0, 00:17:27.500 "data_size": 65536 00:17:27.500 }, 00:17:27.500 { 00:17:27.500 "name": "BaseBdev4", 00:17:27.500 "uuid": "5fa13c69-d227-4088-8d95-b1a57cb0d3b2", 00:17:27.500 "is_configured": true, 00:17:27.500 "data_offset": 0, 00:17:27.500 "data_size": 65536 00:17:27.500 } 00:17:27.500 ] 00:17:27.500 }' 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.500 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.759 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.759 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.759 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.759 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:27.759 16:33:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.759 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:27.759 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:27.759 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.759 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.760 [2024-12-06 16:33:09.585055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.760 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.018 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.018 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.019 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.019 16:33:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.019 "name": "Existed_Raid", 00:17:28.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.019 "strip_size_kb": 64, 00:17:28.019 "state": "configuring", 00:17:28.019 "raid_level": "raid5f", 00:17:28.019 "superblock": false, 00:17:28.019 "num_base_bdevs": 4, 00:17:28.019 "num_base_bdevs_discovered": 3, 00:17:28.019 "num_base_bdevs_operational": 4, 00:17:28.019 "base_bdevs_list": [ 00:17:28.019 { 00:17:28.019 "name": null, 00:17:28.019 "uuid": "b540d7c9-17e0-491f-b754-922deacef796", 00:17:28.019 "is_configured": false, 00:17:28.019 "data_offset": 0, 00:17:28.019 "data_size": 65536 00:17:28.019 }, 00:17:28.019 { 00:17:28.019 "name": "BaseBdev2", 00:17:28.019 "uuid": "0e5d2567-819f-49e7-bc5b-66a4f53c3e28", 00:17:28.019 "is_configured": true, 00:17:28.019 "data_offset": 0, 00:17:28.019 "data_size": 65536 00:17:28.019 }, 00:17:28.019 { 00:17:28.019 "name": "BaseBdev3", 00:17:28.019 "uuid": "ad9d6589-4532-46c3-89dd-3c9554fcf3f2", 00:17:28.019 "is_configured": true, 00:17:28.019 "data_offset": 0, 00:17:28.019 "data_size": 65536 00:17:28.019 }, 00:17:28.019 { 00:17:28.019 "name": "BaseBdev4", 00:17:28.019 "uuid": "5fa13c69-d227-4088-8d95-b1a57cb0d3b2", 00:17:28.019 "is_configured": true, 00:17:28.019 "data_offset": 0, 00:17:28.019 "data_size": 65536 00:17:28.019 } 00:17:28.019 ] 00:17:28.019 }' 00:17:28.019 16:33:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.019 16:33:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b540d7c9-17e0-491f-b754-922deacef796 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.277 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.535 [2024-12-06 16:33:10.115521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:28.535 [2024-12-06 
16:33:10.115656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:17:28.535 [2024-12-06 16:33:10.115686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:28.535 [2024-12-06 16:33:10.116019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:28.535 [2024-12-06 16:33:10.116558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:17:28.535 [2024-12-06 16:33:10.116617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:17:28.535 [2024-12-06 16:33:10.116854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.535 NewBaseBdev 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.535 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.535 [ 00:17:28.535 { 00:17:28.535 "name": "NewBaseBdev", 00:17:28.535 "aliases": [ 00:17:28.535 "b540d7c9-17e0-491f-b754-922deacef796" 00:17:28.535 ], 00:17:28.535 "product_name": "Malloc disk", 00:17:28.535 "block_size": 512, 00:17:28.535 "num_blocks": 65536, 00:17:28.535 "uuid": "b540d7c9-17e0-491f-b754-922deacef796", 00:17:28.535 "assigned_rate_limits": { 00:17:28.535 "rw_ios_per_sec": 0, 00:17:28.535 "rw_mbytes_per_sec": 0, 00:17:28.535 "r_mbytes_per_sec": 0, 00:17:28.535 "w_mbytes_per_sec": 0 00:17:28.535 }, 00:17:28.535 "claimed": true, 00:17:28.535 "claim_type": "exclusive_write", 00:17:28.535 "zoned": false, 00:17:28.535 "supported_io_types": { 00:17:28.535 "read": true, 00:17:28.536 "write": true, 00:17:28.536 "unmap": true, 00:17:28.536 "flush": true, 00:17:28.536 "reset": true, 00:17:28.536 "nvme_admin": false, 00:17:28.536 "nvme_io": false, 00:17:28.536 "nvme_io_md": false, 00:17:28.536 "write_zeroes": true, 00:17:28.536 "zcopy": true, 00:17:28.536 "get_zone_info": false, 00:17:28.536 "zone_management": false, 00:17:28.536 "zone_append": false, 00:17:28.536 "compare": false, 00:17:28.536 "compare_and_write": false, 00:17:28.536 "abort": true, 00:17:28.536 "seek_hole": false, 00:17:28.536 "seek_data": false, 00:17:28.536 "copy": true, 00:17:28.536 "nvme_iov_md": false 00:17:28.536 }, 00:17:28.536 "memory_domains": [ 00:17:28.536 { 00:17:28.536 "dma_device_id": "system", 00:17:28.536 "dma_device_type": 1 00:17:28.536 }, 00:17:28.536 { 00:17:28.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.536 "dma_device_type": 2 00:17:28.536 } 
00:17:28.536 ], 00:17:28.536 "driver_specific": {} 00:17:28.536 } 00:17:28.536 ] 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.536 "name": "Existed_Raid", 00:17:28.536 "uuid": "be1e8b10-2eab-4e47-9c67-dd94ed7ca68e", 00:17:28.536 "strip_size_kb": 64, 00:17:28.536 "state": "online", 00:17:28.536 "raid_level": "raid5f", 00:17:28.536 "superblock": false, 00:17:28.536 "num_base_bdevs": 4, 00:17:28.536 "num_base_bdevs_discovered": 4, 00:17:28.536 "num_base_bdevs_operational": 4, 00:17:28.536 "base_bdevs_list": [ 00:17:28.536 { 00:17:28.536 "name": "NewBaseBdev", 00:17:28.536 "uuid": "b540d7c9-17e0-491f-b754-922deacef796", 00:17:28.536 "is_configured": true, 00:17:28.536 "data_offset": 0, 00:17:28.536 "data_size": 65536 00:17:28.536 }, 00:17:28.536 { 00:17:28.536 "name": "BaseBdev2", 00:17:28.536 "uuid": "0e5d2567-819f-49e7-bc5b-66a4f53c3e28", 00:17:28.536 "is_configured": true, 00:17:28.536 "data_offset": 0, 00:17:28.536 "data_size": 65536 00:17:28.536 }, 00:17:28.536 { 00:17:28.536 "name": "BaseBdev3", 00:17:28.536 "uuid": "ad9d6589-4532-46c3-89dd-3c9554fcf3f2", 00:17:28.536 "is_configured": true, 00:17:28.536 "data_offset": 0, 00:17:28.536 "data_size": 65536 00:17:28.536 }, 00:17:28.536 { 00:17:28.536 "name": "BaseBdev4", 00:17:28.536 "uuid": "5fa13c69-d227-4088-8d95-b1a57cb0d3b2", 00:17:28.536 "is_configured": true, 00:17:28.536 "data_offset": 0, 00:17:28.536 "data_size": 65536 00:17:28.536 } 00:17:28.536 ] 00:17:28.536 }' 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.536 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:28.795 [2024-12-06 16:33:10.594975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:28.795 "name": "Existed_Raid", 00:17:28.795 "aliases": [ 00:17:28.795 "be1e8b10-2eab-4e47-9c67-dd94ed7ca68e" 00:17:28.795 ], 00:17:28.795 "product_name": "Raid Volume", 00:17:28.795 "block_size": 512, 00:17:28.795 "num_blocks": 196608, 00:17:28.795 "uuid": "be1e8b10-2eab-4e47-9c67-dd94ed7ca68e", 00:17:28.795 "assigned_rate_limits": { 00:17:28.795 "rw_ios_per_sec": 0, 00:17:28.795 "rw_mbytes_per_sec": 0, 00:17:28.795 "r_mbytes_per_sec": 0, 00:17:28.795 "w_mbytes_per_sec": 0 00:17:28.795 }, 00:17:28.795 "claimed": false, 00:17:28.795 "zoned": false, 00:17:28.795 "supported_io_types": { 00:17:28.795 "read": true, 00:17:28.795 "write": true, 00:17:28.795 "unmap": false, 00:17:28.795 "flush": false, 00:17:28.795 "reset": true, 00:17:28.795 "nvme_admin": false, 00:17:28.795 "nvme_io": false, 00:17:28.795 "nvme_io_md": 
false, 00:17:28.795 "write_zeroes": true, 00:17:28.795 "zcopy": false, 00:17:28.795 "get_zone_info": false, 00:17:28.795 "zone_management": false, 00:17:28.795 "zone_append": false, 00:17:28.795 "compare": false, 00:17:28.795 "compare_and_write": false, 00:17:28.795 "abort": false, 00:17:28.795 "seek_hole": false, 00:17:28.795 "seek_data": false, 00:17:28.795 "copy": false, 00:17:28.795 "nvme_iov_md": false 00:17:28.795 }, 00:17:28.795 "driver_specific": { 00:17:28.795 "raid": { 00:17:28.795 "uuid": "be1e8b10-2eab-4e47-9c67-dd94ed7ca68e", 00:17:28.795 "strip_size_kb": 64, 00:17:28.795 "state": "online", 00:17:28.795 "raid_level": "raid5f", 00:17:28.795 "superblock": false, 00:17:28.795 "num_base_bdevs": 4, 00:17:28.795 "num_base_bdevs_discovered": 4, 00:17:28.795 "num_base_bdevs_operational": 4, 00:17:28.795 "base_bdevs_list": [ 00:17:28.795 { 00:17:28.795 "name": "NewBaseBdev", 00:17:28.795 "uuid": "b540d7c9-17e0-491f-b754-922deacef796", 00:17:28.795 "is_configured": true, 00:17:28.795 "data_offset": 0, 00:17:28.795 "data_size": 65536 00:17:28.795 }, 00:17:28.795 { 00:17:28.795 "name": "BaseBdev2", 00:17:28.795 "uuid": "0e5d2567-819f-49e7-bc5b-66a4f53c3e28", 00:17:28.795 "is_configured": true, 00:17:28.795 "data_offset": 0, 00:17:28.795 "data_size": 65536 00:17:28.795 }, 00:17:28.795 { 00:17:28.795 "name": "BaseBdev3", 00:17:28.795 "uuid": "ad9d6589-4532-46c3-89dd-3c9554fcf3f2", 00:17:28.795 "is_configured": true, 00:17:28.795 "data_offset": 0, 00:17:28.795 "data_size": 65536 00:17:28.795 }, 00:17:28.795 { 00:17:28.795 "name": "BaseBdev4", 00:17:28.795 "uuid": "5fa13c69-d227-4088-8d95-b1a57cb0d3b2", 00:17:28.795 "is_configured": true, 00:17:28.795 "data_offset": 0, 00:17:28.795 "data_size": 65536 00:17:28.795 } 00:17:28.795 ] 00:17:28.795 } 00:17:28.795 } 00:17:28.795 }' 00:17:28.795 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.054 16:33:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:29.054 BaseBdev2 00:17:29.054 BaseBdev3 00:17:29.054 BaseBdev4' 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.054 16:33:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.054 [2024-12-06 16:33:10.878268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.054 [2024-12-06 16:33:10.878298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.054 [2024-12-06 16:33:10.878375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.054 [2024-12-06 16:33:10.878642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.054 [2024-12-06 16:33:10.878654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93735 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 93735 ']' 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 93735 00:17:29.054 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:29.313 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:17:29.313 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93735 00:17:29.313 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.313 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.313 killing process with pid 93735 00:17:29.313 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93735' 00:17:29.313 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 93735 00:17:29.313 [2024-12-06 16:33:10.927627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.313 16:33:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 93735 00:17:29.313 [2024-12-06 16:33:10.968231] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:29.572 ************************************ 00:17:29.572 END TEST raid5f_state_function_test 00:17:29.572 00:17:29.572 real 0m9.766s 00:17:29.572 user 0m16.667s 00:17:29.572 sys 0m2.108s 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.572 ************************************ 00:17:29.572 16:33:11 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:29.572 16:33:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:29.572 16:33:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.572 16:33:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.572 ************************************ 00:17:29.572 START TEST 
raid5f_state_function_test_sb 00:17:29.572 ************************************ 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:29.572 
16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=94390 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94390' 00:17:29.572 Process raid pid: 94390 00:17:29.572 16:33:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 94390 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 94390 ']' 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.572 16:33:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.572 [2024-12-06 16:33:11.362708] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:17:29.572 [2024-12-06 16:33:11.362841] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.830 [2024-12-06 16:33:11.535101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.830 [2024-12-06 16:33:11.560525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.830 [2024-12-06 16:33:11.603495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.830 [2024-12-06 16:33:11.603539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.398 [2024-12-06 16:33:12.198455] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.398 [2024-12-06 16:33:12.198586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.398 [2024-12-06 16:33:12.198608] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.398 [2024-12-06 16:33:12.198621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.398 [2024-12-06 16:33:12.198627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:30.398 [2024-12-06 16:33:12.198638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.398 [2024-12-06 16:33:12.198644] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:30.398 [2024-12-06 16:33:12.198654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.398 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.656 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.656 "name": "Existed_Raid", 00:17:30.656 "uuid": "d6893e6e-97e8-4256-888c-c409681216a4", 00:17:30.656 "strip_size_kb": 64, 00:17:30.656 "state": "configuring", 00:17:30.656 "raid_level": "raid5f", 00:17:30.656 "superblock": true, 00:17:30.656 "num_base_bdevs": 4, 00:17:30.656 "num_base_bdevs_discovered": 0, 00:17:30.656 "num_base_bdevs_operational": 4, 00:17:30.656 "base_bdevs_list": [ 00:17:30.656 { 00:17:30.656 "name": "BaseBdev1", 00:17:30.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.656 "is_configured": false, 00:17:30.656 "data_offset": 0, 00:17:30.656 "data_size": 0 00:17:30.656 }, 00:17:30.656 { 00:17:30.656 "name": "BaseBdev2", 00:17:30.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.656 "is_configured": false, 00:17:30.656 "data_offset": 0, 00:17:30.656 "data_size": 0 00:17:30.656 }, 00:17:30.656 { 00:17:30.656 "name": "BaseBdev3", 00:17:30.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.656 "is_configured": false, 00:17:30.656 "data_offset": 0, 00:17:30.656 "data_size": 0 00:17:30.656 }, 00:17:30.656 { 00:17:30.656 "name": "BaseBdev4", 00:17:30.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.656 "is_configured": false, 00:17:30.656 "data_offset": 0, 00:17:30.656 "data_size": 0 00:17:30.656 } 00:17:30.656 ] 00:17:30.656 }' 00:17:30.656 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.656 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.915 [2024-12-06 16:33:12.689470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:30.915 [2024-12-06 16:33:12.689568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.915 [2024-12-06 16:33:12.701462] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.915 [2024-12-06 16:33:12.701535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.915 [2024-12-06 16:33:12.701579] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.915 [2024-12-06 16:33:12.701602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.915 [2024-12-06 16:33:12.701621] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:30.915 [2024-12-06 16:33:12.701643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.915 [2024-12-06 16:33:12.701661] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:30.915 [2024-12-06 16:33:12.701682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.915 [2024-12-06 16:33:12.722316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.915 BaseBdev1 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.915 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.915 [ 00:17:30.915 { 00:17:30.915 "name": "BaseBdev1", 00:17:30.915 "aliases": [ 00:17:30.915 "38e51909-887b-4c20-9776-61e606efc1c4" 00:17:30.915 ], 00:17:30.915 "product_name": "Malloc disk", 00:17:30.915 "block_size": 512, 00:17:30.915 "num_blocks": 65536, 00:17:30.915 "uuid": "38e51909-887b-4c20-9776-61e606efc1c4", 00:17:30.915 "assigned_rate_limits": { 00:17:30.915 "rw_ios_per_sec": 0, 00:17:30.915 "rw_mbytes_per_sec": 0, 00:17:30.915 "r_mbytes_per_sec": 0, 00:17:30.915 "w_mbytes_per_sec": 0 00:17:30.915 }, 00:17:30.915 "claimed": true, 00:17:30.915 "claim_type": "exclusive_write", 00:17:31.176 "zoned": false, 00:17:31.176 "supported_io_types": { 00:17:31.176 "read": true, 00:17:31.176 "write": true, 00:17:31.176 "unmap": true, 00:17:31.176 "flush": true, 00:17:31.176 "reset": true, 00:17:31.176 "nvme_admin": false, 00:17:31.176 "nvme_io": false, 00:17:31.176 "nvme_io_md": false, 00:17:31.176 "write_zeroes": true, 00:17:31.176 "zcopy": true, 00:17:31.176 "get_zone_info": false, 00:17:31.176 "zone_management": false, 00:17:31.176 "zone_append": false, 00:17:31.176 "compare": false, 00:17:31.176 "compare_and_write": false, 00:17:31.176 "abort": true, 00:17:31.176 "seek_hole": false, 00:17:31.176 "seek_data": false, 00:17:31.176 "copy": true, 00:17:31.176 "nvme_iov_md": false 00:17:31.176 }, 00:17:31.176 "memory_domains": [ 00:17:31.176 { 00:17:31.176 "dma_device_id": "system", 00:17:31.176 "dma_device_type": 1 00:17:31.176 }, 00:17:31.176 { 00:17:31.176 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:31.176 "dma_device_type": 2 00:17:31.176 } 00:17:31.176 ], 00:17:31.176 "driver_specific": {} 00:17:31.176 } 00:17:31.176 ] 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.176 16:33:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.176 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.176 "name": "Existed_Raid", 00:17:31.176 "uuid": "7763953f-bf72-4920-8ef9-adaa64cbb9e5", 00:17:31.176 "strip_size_kb": 64, 00:17:31.176 "state": "configuring", 00:17:31.176 "raid_level": "raid5f", 00:17:31.176 "superblock": true, 00:17:31.176 "num_base_bdevs": 4, 00:17:31.176 "num_base_bdevs_discovered": 1, 00:17:31.176 "num_base_bdevs_operational": 4, 00:17:31.176 "base_bdevs_list": [ 00:17:31.176 { 00:17:31.176 "name": "BaseBdev1", 00:17:31.176 "uuid": "38e51909-887b-4c20-9776-61e606efc1c4", 00:17:31.176 "is_configured": true, 00:17:31.176 "data_offset": 2048, 00:17:31.176 "data_size": 63488 00:17:31.176 }, 00:17:31.176 { 00:17:31.176 "name": "BaseBdev2", 00:17:31.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.176 "is_configured": false, 00:17:31.176 "data_offset": 0, 00:17:31.176 "data_size": 0 00:17:31.176 }, 00:17:31.176 { 00:17:31.176 "name": "BaseBdev3", 00:17:31.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.176 "is_configured": false, 00:17:31.176 "data_offset": 0, 00:17:31.176 "data_size": 0 00:17:31.176 }, 00:17:31.176 { 00:17:31.176 "name": "BaseBdev4", 00:17:31.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.177 "is_configured": false, 00:17:31.177 "data_offset": 0, 00:17:31.177 "data_size": 0 00:17:31.177 } 00:17:31.177 ] 00:17:31.177 }' 00:17:31.177 16:33:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.177 16:33:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.438 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:31.438 16:33:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.438 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.438 [2024-12-06 16:33:13.209556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:31.438 [2024-12-06 16:33:13.209657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:17:31.438 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.439 [2024-12-06 16:33:13.221586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.439 [2024-12-06 16:33:13.223588] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:31.439 [2024-12-06 16:33:13.223631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:31.439 [2024-12-06 16:33:13.223641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:31.439 [2024-12-06 16:33:13.223650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:31.439 [2024-12-06 16:33:13.223657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:31.439 [2024-12-06 16:33:13.223666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.439 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.439 16:33:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.699 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.699 "name": "Existed_Raid", 00:17:31.699 "uuid": "a5f71702-6be9-486b-8dfe-1711e9c0b624", 00:17:31.699 "strip_size_kb": 64, 00:17:31.699 "state": "configuring", 00:17:31.699 "raid_level": "raid5f", 00:17:31.699 "superblock": true, 00:17:31.699 "num_base_bdevs": 4, 00:17:31.699 "num_base_bdevs_discovered": 1, 00:17:31.699 "num_base_bdevs_operational": 4, 00:17:31.699 "base_bdevs_list": [ 00:17:31.699 { 00:17:31.699 "name": "BaseBdev1", 00:17:31.699 "uuid": "38e51909-887b-4c20-9776-61e606efc1c4", 00:17:31.699 "is_configured": true, 00:17:31.699 "data_offset": 2048, 00:17:31.699 "data_size": 63488 00:17:31.699 }, 00:17:31.699 { 00:17:31.699 "name": "BaseBdev2", 00:17:31.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.699 "is_configured": false, 00:17:31.699 "data_offset": 0, 00:17:31.699 "data_size": 0 00:17:31.699 }, 00:17:31.699 { 00:17:31.699 "name": "BaseBdev3", 00:17:31.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.699 "is_configured": false, 00:17:31.699 "data_offset": 0, 00:17:31.699 "data_size": 0 00:17:31.699 }, 00:17:31.699 { 00:17:31.699 "name": "BaseBdev4", 00:17:31.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.699 "is_configured": false, 00:17:31.699 "data_offset": 0, 00:17:31.699 "data_size": 0 00:17:31.699 } 00:17:31.699 ] 00:17:31.699 }' 00:17:31.699 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.699 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.957 [2024-12-06 16:33:13.691850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.957 BaseBdev2 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.957 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.957 [ 00:17:31.957 { 00:17:31.957 "name": "BaseBdev2", 00:17:31.957 "aliases": [ 00:17:31.957 
"8c9387d4-d574-43ba-abfc-345ff68f9b3c" 00:17:31.957 ], 00:17:31.957 "product_name": "Malloc disk", 00:17:31.957 "block_size": 512, 00:17:31.957 "num_blocks": 65536, 00:17:31.957 "uuid": "8c9387d4-d574-43ba-abfc-345ff68f9b3c", 00:17:31.957 "assigned_rate_limits": { 00:17:31.957 "rw_ios_per_sec": 0, 00:17:31.957 "rw_mbytes_per_sec": 0, 00:17:31.957 "r_mbytes_per_sec": 0, 00:17:31.957 "w_mbytes_per_sec": 0 00:17:31.957 }, 00:17:31.957 "claimed": true, 00:17:31.957 "claim_type": "exclusive_write", 00:17:31.957 "zoned": false, 00:17:31.957 "supported_io_types": { 00:17:31.957 "read": true, 00:17:31.957 "write": true, 00:17:31.957 "unmap": true, 00:17:31.957 "flush": true, 00:17:31.957 "reset": true, 00:17:31.957 "nvme_admin": false, 00:17:31.957 "nvme_io": false, 00:17:31.957 "nvme_io_md": false, 00:17:31.957 "write_zeroes": true, 00:17:31.957 "zcopy": true, 00:17:31.957 "get_zone_info": false, 00:17:31.958 "zone_management": false, 00:17:31.958 "zone_append": false, 00:17:31.958 "compare": false, 00:17:31.958 "compare_and_write": false, 00:17:31.958 "abort": true, 00:17:31.958 "seek_hole": false, 00:17:31.958 "seek_data": false, 00:17:31.958 "copy": true, 00:17:31.958 "nvme_iov_md": false 00:17:31.958 }, 00:17:31.958 "memory_domains": [ 00:17:31.958 { 00:17:31.958 "dma_device_id": "system", 00:17:31.958 "dma_device_type": 1 00:17:31.958 }, 00:17:31.958 { 00:17:31.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.958 "dma_device_type": 2 00:17:31.958 } 00:17:31.958 ], 00:17:31.958 "driver_specific": {} 00:17:31.958 } 00:17:31.958 ] 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.958 "name": "Existed_Raid", 00:17:31.958 "uuid": 
"a5f71702-6be9-486b-8dfe-1711e9c0b624", 00:17:31.958 "strip_size_kb": 64, 00:17:31.958 "state": "configuring", 00:17:31.958 "raid_level": "raid5f", 00:17:31.958 "superblock": true, 00:17:31.958 "num_base_bdevs": 4, 00:17:31.958 "num_base_bdevs_discovered": 2, 00:17:31.958 "num_base_bdevs_operational": 4, 00:17:31.958 "base_bdevs_list": [ 00:17:31.958 { 00:17:31.958 "name": "BaseBdev1", 00:17:31.958 "uuid": "38e51909-887b-4c20-9776-61e606efc1c4", 00:17:31.958 "is_configured": true, 00:17:31.958 "data_offset": 2048, 00:17:31.958 "data_size": 63488 00:17:31.958 }, 00:17:31.958 { 00:17:31.958 "name": "BaseBdev2", 00:17:31.958 "uuid": "8c9387d4-d574-43ba-abfc-345ff68f9b3c", 00:17:31.958 "is_configured": true, 00:17:31.958 "data_offset": 2048, 00:17:31.958 "data_size": 63488 00:17:31.958 }, 00:17:31.958 { 00:17:31.958 "name": "BaseBdev3", 00:17:31.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.958 "is_configured": false, 00:17:31.958 "data_offset": 0, 00:17:31.958 "data_size": 0 00:17:31.958 }, 00:17:31.958 { 00:17:31.958 "name": "BaseBdev4", 00:17:31.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.958 "is_configured": false, 00:17:31.958 "data_offset": 0, 00:17:31.958 "data_size": 0 00:17:31.958 } 00:17:31.958 ] 00:17:31.958 }' 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.958 16:33:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.525 [2024-12-06 16:33:14.229110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:32.525 BaseBdev3 
00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.525 [ 00:17:32.525 { 00:17:32.525 "name": "BaseBdev3", 00:17:32.525 "aliases": [ 00:17:32.525 "02519a3c-7307-4a71-af4d-03f42a3b6a8d" 00:17:32.525 ], 00:17:32.525 "product_name": "Malloc disk", 00:17:32.525 "block_size": 512, 00:17:32.525 "num_blocks": 65536, 00:17:32.525 "uuid": "02519a3c-7307-4a71-af4d-03f42a3b6a8d", 00:17:32.525 
"assigned_rate_limits": { 00:17:32.525 "rw_ios_per_sec": 0, 00:17:32.525 "rw_mbytes_per_sec": 0, 00:17:32.525 "r_mbytes_per_sec": 0, 00:17:32.525 "w_mbytes_per_sec": 0 00:17:32.525 }, 00:17:32.525 "claimed": true, 00:17:32.525 "claim_type": "exclusive_write", 00:17:32.525 "zoned": false, 00:17:32.525 "supported_io_types": { 00:17:32.525 "read": true, 00:17:32.525 "write": true, 00:17:32.525 "unmap": true, 00:17:32.525 "flush": true, 00:17:32.525 "reset": true, 00:17:32.525 "nvme_admin": false, 00:17:32.525 "nvme_io": false, 00:17:32.525 "nvme_io_md": false, 00:17:32.525 "write_zeroes": true, 00:17:32.525 "zcopy": true, 00:17:32.525 "get_zone_info": false, 00:17:32.525 "zone_management": false, 00:17:32.525 "zone_append": false, 00:17:32.525 "compare": false, 00:17:32.525 "compare_and_write": false, 00:17:32.525 "abort": true, 00:17:32.525 "seek_hole": false, 00:17:32.525 "seek_data": false, 00:17:32.525 "copy": true, 00:17:32.525 "nvme_iov_md": false 00:17:32.525 }, 00:17:32.525 "memory_domains": [ 00:17:32.525 { 00:17:32.525 "dma_device_id": "system", 00:17:32.525 "dma_device_type": 1 00:17:32.525 }, 00:17:32.525 { 00:17:32.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.525 "dma_device_type": 2 00:17:32.525 } 00:17:32.525 ], 00:17:32.525 "driver_specific": {} 00:17:32.525 } 00:17:32.525 ] 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:32.525 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.526 "name": "Existed_Raid", 00:17:32.526 "uuid": "a5f71702-6be9-486b-8dfe-1711e9c0b624", 00:17:32.526 "strip_size_kb": 64, 00:17:32.526 "state": "configuring", 00:17:32.526 "raid_level": "raid5f", 00:17:32.526 "superblock": true, 00:17:32.526 "num_base_bdevs": 4, 00:17:32.526 "num_base_bdevs_discovered": 3, 
00:17:32.526 "num_base_bdevs_operational": 4, 00:17:32.526 "base_bdevs_list": [ 00:17:32.526 { 00:17:32.526 "name": "BaseBdev1", 00:17:32.526 "uuid": "38e51909-887b-4c20-9776-61e606efc1c4", 00:17:32.526 "is_configured": true, 00:17:32.526 "data_offset": 2048, 00:17:32.526 "data_size": 63488 00:17:32.526 }, 00:17:32.526 { 00:17:32.526 "name": "BaseBdev2", 00:17:32.526 "uuid": "8c9387d4-d574-43ba-abfc-345ff68f9b3c", 00:17:32.526 "is_configured": true, 00:17:32.526 "data_offset": 2048, 00:17:32.526 "data_size": 63488 00:17:32.526 }, 00:17:32.526 { 00:17:32.526 "name": "BaseBdev3", 00:17:32.526 "uuid": "02519a3c-7307-4a71-af4d-03f42a3b6a8d", 00:17:32.526 "is_configured": true, 00:17:32.526 "data_offset": 2048, 00:17:32.526 "data_size": 63488 00:17:32.526 }, 00:17:32.526 { 00:17:32.526 "name": "BaseBdev4", 00:17:32.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.526 "is_configured": false, 00:17:32.526 "data_offset": 0, 00:17:32.526 "data_size": 0 00:17:32.526 } 00:17:32.526 ] 00:17:32.526 }' 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.526 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.095 [2024-12-06 16:33:14.751406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:33.095 [2024-12-06 16:33:14.751739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:17:33.095 [2024-12-06 16:33:14.751796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:33.095 BaseBdev4 
00:17:33.095 [2024-12-06 16:33:14.752117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:33.095 [2024-12-06 16:33:14.752626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:17:33.095 [2024-12-06 16:33:14.752685] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:17:33.095 [2024-12-06 16:33:14.752857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:33.095 16:33:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.095 [ 00:17:33.095 { 00:17:33.095 "name": "BaseBdev4", 00:17:33.095 "aliases": [ 00:17:33.095 "45cbf01e-926f-4db7-b2a2-3dc4f60a86eb" 00:17:33.095 ], 00:17:33.095 "product_name": "Malloc disk", 00:17:33.095 "block_size": 512, 00:17:33.095 "num_blocks": 65536, 00:17:33.095 "uuid": "45cbf01e-926f-4db7-b2a2-3dc4f60a86eb", 00:17:33.095 "assigned_rate_limits": { 00:17:33.095 "rw_ios_per_sec": 0, 00:17:33.095 "rw_mbytes_per_sec": 0, 00:17:33.095 "r_mbytes_per_sec": 0, 00:17:33.095 "w_mbytes_per_sec": 0 00:17:33.095 }, 00:17:33.095 "claimed": true, 00:17:33.095 "claim_type": "exclusive_write", 00:17:33.095 "zoned": false, 00:17:33.095 "supported_io_types": { 00:17:33.095 "read": true, 00:17:33.095 "write": true, 00:17:33.095 "unmap": true, 00:17:33.095 "flush": true, 00:17:33.095 "reset": true, 00:17:33.095 "nvme_admin": false, 00:17:33.095 "nvme_io": false, 00:17:33.095 "nvme_io_md": false, 00:17:33.095 "write_zeroes": true, 00:17:33.095 "zcopy": true, 00:17:33.095 "get_zone_info": false, 00:17:33.095 "zone_management": false, 00:17:33.095 "zone_append": false, 00:17:33.095 "compare": false, 00:17:33.095 "compare_and_write": false, 00:17:33.095 "abort": true, 00:17:33.095 "seek_hole": false, 00:17:33.095 "seek_data": false, 00:17:33.095 "copy": true, 00:17:33.095 "nvme_iov_md": false 00:17:33.095 }, 00:17:33.095 "memory_domains": [ 00:17:33.095 { 00:17:33.095 "dma_device_id": "system", 00:17:33.095 "dma_device_type": 1 00:17:33.095 }, 00:17:33.095 { 00:17:33.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.095 "dma_device_type": 2 00:17:33.095 } 00:17:33.095 ], 00:17:33.095 "driver_specific": {} 00:17:33.095 } 00:17:33.095 ] 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.095 16:33:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.095 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.096 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.096 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.096 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.096 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.096 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.096 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:17:33.096 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.096 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.096 "name": "Existed_Raid", 00:17:33.096 "uuid": "a5f71702-6be9-486b-8dfe-1711e9c0b624", 00:17:33.096 "strip_size_kb": 64, 00:17:33.096 "state": "online", 00:17:33.096 "raid_level": "raid5f", 00:17:33.096 "superblock": true, 00:17:33.096 "num_base_bdevs": 4, 00:17:33.096 "num_base_bdevs_discovered": 4, 00:17:33.096 "num_base_bdevs_operational": 4, 00:17:33.096 "base_bdevs_list": [ 00:17:33.096 { 00:17:33.096 "name": "BaseBdev1", 00:17:33.096 "uuid": "38e51909-887b-4c20-9776-61e606efc1c4", 00:17:33.096 "is_configured": true, 00:17:33.096 "data_offset": 2048, 00:17:33.096 "data_size": 63488 00:17:33.096 }, 00:17:33.096 { 00:17:33.096 "name": "BaseBdev2", 00:17:33.096 "uuid": "8c9387d4-d574-43ba-abfc-345ff68f9b3c", 00:17:33.096 "is_configured": true, 00:17:33.096 "data_offset": 2048, 00:17:33.096 "data_size": 63488 00:17:33.096 }, 00:17:33.096 { 00:17:33.096 "name": "BaseBdev3", 00:17:33.096 "uuid": "02519a3c-7307-4a71-af4d-03f42a3b6a8d", 00:17:33.096 "is_configured": true, 00:17:33.096 "data_offset": 2048, 00:17:33.096 "data_size": 63488 00:17:33.096 }, 00:17:33.096 { 00:17:33.096 "name": "BaseBdev4", 00:17:33.096 "uuid": "45cbf01e-926f-4db7-b2a2-3dc4f60a86eb", 00:17:33.096 "is_configured": true, 00:17:33.096 "data_offset": 2048, 00:17:33.096 "data_size": 63488 00:17:33.096 } 00:17:33.096 ] 00:17:33.096 }' 00:17:33.096 16:33:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.096 16:33:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.665 [2024-12-06 16:33:15.262824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:33.665 "name": "Existed_Raid", 00:17:33.665 "aliases": [ 00:17:33.665 "a5f71702-6be9-486b-8dfe-1711e9c0b624" 00:17:33.665 ], 00:17:33.665 "product_name": "Raid Volume", 00:17:33.665 "block_size": 512, 00:17:33.665 "num_blocks": 190464, 00:17:33.665 "uuid": "a5f71702-6be9-486b-8dfe-1711e9c0b624", 00:17:33.665 "assigned_rate_limits": { 00:17:33.665 "rw_ios_per_sec": 0, 00:17:33.665 "rw_mbytes_per_sec": 0, 00:17:33.665 "r_mbytes_per_sec": 0, 00:17:33.665 "w_mbytes_per_sec": 0 00:17:33.665 }, 00:17:33.665 "claimed": false, 00:17:33.665 "zoned": false, 00:17:33.665 "supported_io_types": { 00:17:33.665 "read": true, 00:17:33.665 "write": true, 00:17:33.665 "unmap": false, 00:17:33.665 "flush": false, 
00:17:33.665 "reset": true, 00:17:33.665 "nvme_admin": false, 00:17:33.665 "nvme_io": false, 00:17:33.665 "nvme_io_md": false, 00:17:33.665 "write_zeroes": true, 00:17:33.665 "zcopy": false, 00:17:33.665 "get_zone_info": false, 00:17:33.665 "zone_management": false, 00:17:33.665 "zone_append": false, 00:17:33.665 "compare": false, 00:17:33.665 "compare_and_write": false, 00:17:33.665 "abort": false, 00:17:33.665 "seek_hole": false, 00:17:33.665 "seek_data": false, 00:17:33.665 "copy": false, 00:17:33.665 "nvme_iov_md": false 00:17:33.665 }, 00:17:33.665 "driver_specific": { 00:17:33.665 "raid": { 00:17:33.665 "uuid": "a5f71702-6be9-486b-8dfe-1711e9c0b624", 00:17:33.665 "strip_size_kb": 64, 00:17:33.665 "state": "online", 00:17:33.665 "raid_level": "raid5f", 00:17:33.665 "superblock": true, 00:17:33.665 "num_base_bdevs": 4, 00:17:33.665 "num_base_bdevs_discovered": 4, 00:17:33.665 "num_base_bdevs_operational": 4, 00:17:33.665 "base_bdevs_list": [ 00:17:33.665 { 00:17:33.665 "name": "BaseBdev1", 00:17:33.665 "uuid": "38e51909-887b-4c20-9776-61e606efc1c4", 00:17:33.665 "is_configured": true, 00:17:33.665 "data_offset": 2048, 00:17:33.665 "data_size": 63488 00:17:33.665 }, 00:17:33.665 { 00:17:33.665 "name": "BaseBdev2", 00:17:33.665 "uuid": "8c9387d4-d574-43ba-abfc-345ff68f9b3c", 00:17:33.665 "is_configured": true, 00:17:33.665 "data_offset": 2048, 00:17:33.665 "data_size": 63488 00:17:33.665 }, 00:17:33.665 { 00:17:33.665 "name": "BaseBdev3", 00:17:33.665 "uuid": "02519a3c-7307-4a71-af4d-03f42a3b6a8d", 00:17:33.665 "is_configured": true, 00:17:33.665 "data_offset": 2048, 00:17:33.665 "data_size": 63488 00:17:33.665 }, 00:17:33.665 { 00:17:33.665 "name": "BaseBdev4", 00:17:33.665 "uuid": "45cbf01e-926f-4db7-b2a2-3dc4f60a86eb", 00:17:33.665 "is_configured": true, 00:17:33.665 "data_offset": 2048, 00:17:33.665 "data_size": 63488 00:17:33.665 } 00:17:33.665 ] 00:17:33.665 } 00:17:33.665 } 00:17:33.665 }' 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:33.665 BaseBdev2 00:17:33.665 BaseBdev3 00:17:33.665 BaseBdev4' 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:33.665 16:33:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.665 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.666 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.666 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.666 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.666 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:33.666 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.666 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.925 16:33:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.925 [2024-12-06 16:33:15.594097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.925 "name": "Existed_Raid", 00:17:33.925 "uuid": "a5f71702-6be9-486b-8dfe-1711e9c0b624", 00:17:33.925 "strip_size_kb": 64, 00:17:33.925 "state": "online", 00:17:33.925 "raid_level": "raid5f", 00:17:33.925 "superblock": true, 00:17:33.925 "num_base_bdevs": 4, 00:17:33.925 "num_base_bdevs_discovered": 3, 00:17:33.925 "num_base_bdevs_operational": 3, 00:17:33.925 "base_bdevs_list": [ 00:17:33.925 { 00:17:33.925 "name": 
null, 00:17:33.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.925 "is_configured": false, 00:17:33.925 "data_offset": 0, 00:17:33.925 "data_size": 63488 00:17:33.925 }, 00:17:33.925 { 00:17:33.925 "name": "BaseBdev2", 00:17:33.925 "uuid": "8c9387d4-d574-43ba-abfc-345ff68f9b3c", 00:17:33.925 "is_configured": true, 00:17:33.925 "data_offset": 2048, 00:17:33.925 "data_size": 63488 00:17:33.925 }, 00:17:33.925 { 00:17:33.925 "name": "BaseBdev3", 00:17:33.925 "uuid": "02519a3c-7307-4a71-af4d-03f42a3b6a8d", 00:17:33.925 "is_configured": true, 00:17:33.925 "data_offset": 2048, 00:17:33.925 "data_size": 63488 00:17:33.925 }, 00:17:33.925 { 00:17:33.925 "name": "BaseBdev4", 00:17:33.925 "uuid": "45cbf01e-926f-4db7-b2a2-3dc4f60a86eb", 00:17:33.925 "is_configured": true, 00:17:33.925 "data_offset": 2048, 00:17:33.925 "data_size": 63488 00:17:33.925 } 00:17:33.925 ] 00:17:33.925 }' 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.925 16:33:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.185 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:34.185 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.185 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.185 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.185 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.185 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.445 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.445 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.446 [2024-12-06 16:33:16.072737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:34.446 [2024-12-06 16:33:16.072921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.446 [2024-12-06 16:33:16.084418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.446 [2024-12-06 16:33:16.140398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.446 [2024-12-06 
16:33:16.211424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:34.446 [2024-12-06 16:33:16.211542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:34.446 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.446 16:33:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.706 BaseBdev2 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.706 [ 00:17:34.706 { 00:17:34.706 "name": "BaseBdev2", 00:17:34.706 "aliases": [ 00:17:34.706 "5ddbe8bc-c58a-431f-bde5-a7f40563cea6" 00:17:34.706 ], 00:17:34.706 "product_name": "Malloc disk", 00:17:34.706 "block_size": 512, 00:17:34.706 
"num_blocks": 65536, 00:17:34.706 "uuid": "5ddbe8bc-c58a-431f-bde5-a7f40563cea6", 00:17:34.706 "assigned_rate_limits": { 00:17:34.706 "rw_ios_per_sec": 0, 00:17:34.706 "rw_mbytes_per_sec": 0, 00:17:34.706 "r_mbytes_per_sec": 0, 00:17:34.706 "w_mbytes_per_sec": 0 00:17:34.706 }, 00:17:34.706 "claimed": false, 00:17:34.706 "zoned": false, 00:17:34.706 "supported_io_types": { 00:17:34.706 "read": true, 00:17:34.706 "write": true, 00:17:34.706 "unmap": true, 00:17:34.706 "flush": true, 00:17:34.706 "reset": true, 00:17:34.706 "nvme_admin": false, 00:17:34.706 "nvme_io": false, 00:17:34.706 "nvme_io_md": false, 00:17:34.706 "write_zeroes": true, 00:17:34.706 "zcopy": true, 00:17:34.706 "get_zone_info": false, 00:17:34.706 "zone_management": false, 00:17:34.706 "zone_append": false, 00:17:34.706 "compare": false, 00:17:34.706 "compare_and_write": false, 00:17:34.706 "abort": true, 00:17:34.706 "seek_hole": false, 00:17:34.706 "seek_data": false, 00:17:34.706 "copy": true, 00:17:34.706 "nvme_iov_md": false 00:17:34.706 }, 00:17:34.706 "memory_domains": [ 00:17:34.706 { 00:17:34.706 "dma_device_id": "system", 00:17:34.706 "dma_device_type": 1 00:17:34.706 }, 00:17:34.706 { 00:17:34.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.706 "dma_device_type": 2 00:17:34.706 } 00:17:34.706 ], 00:17:34.706 "driver_specific": {} 00:17:34.706 } 00:17:34.706 ] 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:34.706 16:33:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.706 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.706 BaseBdev3 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.707 [ 00:17:34.707 { 00:17:34.707 "name": "BaseBdev3", 00:17:34.707 "aliases": [ 00:17:34.707 
"a55ef06b-bbc8-4a52-b6f0-5985200d06aa" 00:17:34.707 ], 00:17:34.707 "product_name": "Malloc disk", 00:17:34.707 "block_size": 512, 00:17:34.707 "num_blocks": 65536, 00:17:34.707 "uuid": "a55ef06b-bbc8-4a52-b6f0-5985200d06aa", 00:17:34.707 "assigned_rate_limits": { 00:17:34.707 "rw_ios_per_sec": 0, 00:17:34.707 "rw_mbytes_per_sec": 0, 00:17:34.707 "r_mbytes_per_sec": 0, 00:17:34.707 "w_mbytes_per_sec": 0 00:17:34.707 }, 00:17:34.707 "claimed": false, 00:17:34.707 "zoned": false, 00:17:34.707 "supported_io_types": { 00:17:34.707 "read": true, 00:17:34.707 "write": true, 00:17:34.707 "unmap": true, 00:17:34.707 "flush": true, 00:17:34.707 "reset": true, 00:17:34.707 "nvme_admin": false, 00:17:34.707 "nvme_io": false, 00:17:34.707 "nvme_io_md": false, 00:17:34.707 "write_zeroes": true, 00:17:34.707 "zcopy": true, 00:17:34.707 "get_zone_info": false, 00:17:34.707 "zone_management": false, 00:17:34.707 "zone_append": false, 00:17:34.707 "compare": false, 00:17:34.707 "compare_and_write": false, 00:17:34.707 "abort": true, 00:17:34.707 "seek_hole": false, 00:17:34.707 "seek_data": false, 00:17:34.707 "copy": true, 00:17:34.707 "nvme_iov_md": false 00:17:34.707 }, 00:17:34.707 "memory_domains": [ 00:17:34.707 { 00:17:34.707 "dma_device_id": "system", 00:17:34.707 "dma_device_type": 1 00:17:34.707 }, 00:17:34.707 { 00:17:34.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.707 "dma_device_type": 2 00:17:34.707 } 00:17:34.707 ], 00:17:34.707 "driver_specific": {} 00:17:34.707 } 00:17:34.707 ] 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.707 16:33:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.707 BaseBdev4 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:34.707 [ 00:17:34.707 { 00:17:34.707 "name": "BaseBdev4", 00:17:34.707 "aliases": [ 00:17:34.707 "f3380e2e-2092-4c84-ae0a-0ebb81f9fa1f" 00:17:34.707 ], 00:17:34.707 "product_name": "Malloc disk", 00:17:34.707 "block_size": 512, 00:17:34.707 "num_blocks": 65536, 00:17:34.707 "uuid": "f3380e2e-2092-4c84-ae0a-0ebb81f9fa1f", 00:17:34.707 "assigned_rate_limits": { 00:17:34.707 "rw_ios_per_sec": 0, 00:17:34.707 "rw_mbytes_per_sec": 0, 00:17:34.707 "r_mbytes_per_sec": 0, 00:17:34.707 "w_mbytes_per_sec": 0 00:17:34.707 }, 00:17:34.707 "claimed": false, 00:17:34.707 "zoned": false, 00:17:34.707 "supported_io_types": { 00:17:34.707 "read": true, 00:17:34.707 "write": true, 00:17:34.707 "unmap": true, 00:17:34.707 "flush": true, 00:17:34.707 "reset": true, 00:17:34.707 "nvme_admin": false, 00:17:34.707 "nvme_io": false, 00:17:34.707 "nvme_io_md": false, 00:17:34.707 "write_zeroes": true, 00:17:34.707 "zcopy": true, 00:17:34.707 "get_zone_info": false, 00:17:34.707 "zone_management": false, 00:17:34.707 "zone_append": false, 00:17:34.707 "compare": false, 00:17:34.707 "compare_and_write": false, 00:17:34.707 "abort": true, 00:17:34.707 "seek_hole": false, 00:17:34.707 "seek_data": false, 00:17:34.707 "copy": true, 00:17:34.707 "nvme_iov_md": false 00:17:34.707 }, 00:17:34.707 "memory_domains": [ 00:17:34.707 { 00:17:34.707 "dma_device_id": "system", 00:17:34.707 "dma_device_type": 1 00:17:34.707 }, 00:17:34.707 { 00:17:34.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.707 "dma_device_type": 2 00:17:34.707 } 00:17:34.707 ], 00:17:34.707 "driver_specific": {} 00:17:34.707 } 00:17:34.707 ] 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:34.707 16:33:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.707 [2024-12-06 16:33:16.465864] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:34.707 [2024-12-06 16:33:16.465952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:34.707 [2024-12-06 16:33:16.465995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.707 [2024-12-06 16:33:16.468118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:34.707 [2024-12-06 16:33:16.468228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.707 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.707 "name": "Existed_Raid", 00:17:34.707 "uuid": "3ac6597a-28bc-4bb9-89eb-504c4dd3e935", 00:17:34.707 "strip_size_kb": 64, 00:17:34.707 "state": "configuring", 00:17:34.707 "raid_level": "raid5f", 00:17:34.707 "superblock": true, 00:17:34.707 "num_base_bdevs": 4, 00:17:34.707 "num_base_bdevs_discovered": 3, 00:17:34.707 "num_base_bdevs_operational": 4, 00:17:34.707 "base_bdevs_list": [ 00:17:34.707 { 00:17:34.707 "name": "BaseBdev1", 00:17:34.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.708 "is_configured": false, 00:17:34.708 "data_offset": 0, 00:17:34.708 "data_size": 0 00:17:34.708 }, 00:17:34.708 { 00:17:34.708 "name": "BaseBdev2", 00:17:34.708 "uuid": "5ddbe8bc-c58a-431f-bde5-a7f40563cea6", 00:17:34.708 "is_configured": true, 00:17:34.708 "data_offset": 2048, 00:17:34.708 
"data_size": 63488 00:17:34.708 }, 00:17:34.708 { 00:17:34.708 "name": "BaseBdev3", 00:17:34.708 "uuid": "a55ef06b-bbc8-4a52-b6f0-5985200d06aa", 00:17:34.708 "is_configured": true, 00:17:34.708 "data_offset": 2048, 00:17:34.708 "data_size": 63488 00:17:34.708 }, 00:17:34.708 { 00:17:34.708 "name": "BaseBdev4", 00:17:34.708 "uuid": "f3380e2e-2092-4c84-ae0a-0ebb81f9fa1f", 00:17:34.708 "is_configured": true, 00:17:34.708 "data_offset": 2048, 00:17:34.708 "data_size": 63488 00:17:34.708 } 00:17:34.708 ] 00:17:34.708 }' 00:17:34.708 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.708 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.277 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:35.277 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.277 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.277 [2024-12-06 16:33:16.881181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:35.277 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.278 16:33:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.278 "name": "Existed_Raid", 00:17:35.278 "uuid": "3ac6597a-28bc-4bb9-89eb-504c4dd3e935", 00:17:35.278 "strip_size_kb": 64, 00:17:35.278 "state": "configuring", 00:17:35.278 "raid_level": "raid5f", 00:17:35.278 "superblock": true, 00:17:35.278 "num_base_bdevs": 4, 00:17:35.278 "num_base_bdevs_discovered": 2, 00:17:35.278 "num_base_bdevs_operational": 4, 00:17:35.278 "base_bdevs_list": [ 00:17:35.278 { 00:17:35.278 "name": "BaseBdev1", 00:17:35.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.278 "is_configured": false, 00:17:35.278 "data_offset": 0, 00:17:35.278 "data_size": 0 00:17:35.278 }, 00:17:35.278 { 00:17:35.278 "name": null, 00:17:35.278 "uuid": "5ddbe8bc-c58a-431f-bde5-a7f40563cea6", 00:17:35.278 
"is_configured": false, 00:17:35.278 "data_offset": 0, 00:17:35.278 "data_size": 63488 00:17:35.278 }, 00:17:35.278 { 00:17:35.278 "name": "BaseBdev3", 00:17:35.278 "uuid": "a55ef06b-bbc8-4a52-b6f0-5985200d06aa", 00:17:35.278 "is_configured": true, 00:17:35.278 "data_offset": 2048, 00:17:35.278 "data_size": 63488 00:17:35.278 }, 00:17:35.278 { 00:17:35.278 "name": "BaseBdev4", 00:17:35.278 "uuid": "f3380e2e-2092-4c84-ae0a-0ebb81f9fa1f", 00:17:35.278 "is_configured": true, 00:17:35.278 "data_offset": 2048, 00:17:35.278 "data_size": 63488 00:17:35.278 } 00:17:35.278 ] 00:17:35.278 }' 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.278 16:33:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.538 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:35.538 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.538 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.538 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.797 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.797 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:35.797 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:35.797 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.797 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.797 BaseBdev1 00:17:35.797 [2024-12-06 16:33:17.399299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:17:35.797 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.798 [ 00:17:35.798 { 00:17:35.798 "name": "BaseBdev1", 00:17:35.798 "aliases": [ 00:17:35.798 "17634edf-f310-4b7a-9a84-e6a4f504eddd" 00:17:35.798 ], 00:17:35.798 "product_name": "Malloc disk", 00:17:35.798 "block_size": 512, 00:17:35.798 "num_blocks": 65536, 00:17:35.798 "uuid": "17634edf-f310-4b7a-9a84-e6a4f504eddd", 
00:17:35.798 "assigned_rate_limits": { 00:17:35.798 "rw_ios_per_sec": 0, 00:17:35.798 "rw_mbytes_per_sec": 0, 00:17:35.798 "r_mbytes_per_sec": 0, 00:17:35.798 "w_mbytes_per_sec": 0 00:17:35.798 }, 00:17:35.798 "claimed": true, 00:17:35.798 "claim_type": "exclusive_write", 00:17:35.798 "zoned": false, 00:17:35.798 "supported_io_types": { 00:17:35.798 "read": true, 00:17:35.798 "write": true, 00:17:35.798 "unmap": true, 00:17:35.798 "flush": true, 00:17:35.798 "reset": true, 00:17:35.798 "nvme_admin": false, 00:17:35.798 "nvme_io": false, 00:17:35.798 "nvme_io_md": false, 00:17:35.798 "write_zeroes": true, 00:17:35.798 "zcopy": true, 00:17:35.798 "get_zone_info": false, 00:17:35.798 "zone_management": false, 00:17:35.798 "zone_append": false, 00:17:35.798 "compare": false, 00:17:35.798 "compare_and_write": false, 00:17:35.798 "abort": true, 00:17:35.798 "seek_hole": false, 00:17:35.798 "seek_data": false, 00:17:35.798 "copy": true, 00:17:35.798 "nvme_iov_md": false 00:17:35.798 }, 00:17:35.798 "memory_domains": [ 00:17:35.798 { 00:17:35.798 "dma_device_id": "system", 00:17:35.798 "dma_device_type": 1 00:17:35.798 }, 00:17:35.798 { 00:17:35.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.798 "dma_device_type": 2 00:17:35.798 } 00:17:35.798 ], 00:17:35.798 "driver_specific": {} 00:17:35.798 } 00:17:35.798 ] 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.798 16:33:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.798 "name": "Existed_Raid", 00:17:35.798 "uuid": "3ac6597a-28bc-4bb9-89eb-504c4dd3e935", 00:17:35.798 "strip_size_kb": 64, 00:17:35.798 "state": "configuring", 00:17:35.798 "raid_level": "raid5f", 00:17:35.798 "superblock": true, 00:17:35.798 "num_base_bdevs": 4, 00:17:35.798 "num_base_bdevs_discovered": 3, 00:17:35.798 "num_base_bdevs_operational": 4, 00:17:35.798 "base_bdevs_list": [ 00:17:35.798 { 00:17:35.798 "name": "BaseBdev1", 00:17:35.798 "uuid": "17634edf-f310-4b7a-9a84-e6a4f504eddd", 
00:17:35.798 "is_configured": true, 00:17:35.798 "data_offset": 2048, 00:17:35.798 "data_size": 63488 00:17:35.798 }, 00:17:35.798 { 00:17:35.798 "name": null, 00:17:35.798 "uuid": "5ddbe8bc-c58a-431f-bde5-a7f40563cea6", 00:17:35.798 "is_configured": false, 00:17:35.798 "data_offset": 0, 00:17:35.798 "data_size": 63488 00:17:35.798 }, 00:17:35.798 { 00:17:35.798 "name": "BaseBdev3", 00:17:35.798 "uuid": "a55ef06b-bbc8-4a52-b6f0-5985200d06aa", 00:17:35.798 "is_configured": true, 00:17:35.798 "data_offset": 2048, 00:17:35.798 "data_size": 63488 00:17:35.798 }, 00:17:35.798 { 00:17:35.798 "name": "BaseBdev4", 00:17:35.798 "uuid": "f3380e2e-2092-4c84-ae0a-0ebb81f9fa1f", 00:17:35.798 "is_configured": true, 00:17:35.798 "data_offset": 2048, 00:17:35.798 "data_size": 63488 00:17:35.798 } 00:17:35.798 ] 00:17:35.798 }' 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.798 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.058 [2024-12-06 16:33:17.886540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.058 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.318 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.318 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.318 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.318 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:17:36.318 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.318 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.318 "name": "Existed_Raid", 00:17:36.318 "uuid": "3ac6597a-28bc-4bb9-89eb-504c4dd3e935", 00:17:36.318 "strip_size_kb": 64, 00:17:36.318 "state": "configuring", 00:17:36.318 "raid_level": "raid5f", 00:17:36.318 "superblock": true, 00:17:36.318 "num_base_bdevs": 4, 00:17:36.318 "num_base_bdevs_discovered": 2, 00:17:36.318 "num_base_bdevs_operational": 4, 00:17:36.318 "base_bdevs_list": [ 00:17:36.318 { 00:17:36.318 "name": "BaseBdev1", 00:17:36.318 "uuid": "17634edf-f310-4b7a-9a84-e6a4f504eddd", 00:17:36.318 "is_configured": true, 00:17:36.318 "data_offset": 2048, 00:17:36.318 "data_size": 63488 00:17:36.318 }, 00:17:36.318 { 00:17:36.318 "name": null, 00:17:36.318 "uuid": "5ddbe8bc-c58a-431f-bde5-a7f40563cea6", 00:17:36.318 "is_configured": false, 00:17:36.318 "data_offset": 0, 00:17:36.318 "data_size": 63488 00:17:36.318 }, 00:17:36.318 { 00:17:36.318 "name": null, 00:17:36.318 "uuid": "a55ef06b-bbc8-4a52-b6f0-5985200d06aa", 00:17:36.318 "is_configured": false, 00:17:36.318 "data_offset": 0, 00:17:36.318 "data_size": 63488 00:17:36.318 }, 00:17:36.318 { 00:17:36.318 "name": "BaseBdev4", 00:17:36.318 "uuid": "f3380e2e-2092-4c84-ae0a-0ebb81f9fa1f", 00:17:36.318 "is_configured": true, 00:17:36.318 "data_offset": 2048, 00:17:36.318 "data_size": 63488 00:17:36.318 } 00:17:36.318 ] 00:17:36.318 }' 00:17:36.318 16:33:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.318 16:33:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.578 [2024-12-06 16:33:18.397690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.578 16:33:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.578 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.838 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.838 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.838 "name": "Existed_Raid", 00:17:36.838 "uuid": "3ac6597a-28bc-4bb9-89eb-504c4dd3e935", 00:17:36.838 "strip_size_kb": 64, 00:17:36.838 "state": "configuring", 00:17:36.838 "raid_level": "raid5f", 00:17:36.838 "superblock": true, 00:17:36.838 "num_base_bdevs": 4, 00:17:36.838 "num_base_bdevs_discovered": 3, 00:17:36.838 "num_base_bdevs_operational": 4, 00:17:36.838 "base_bdevs_list": [ 00:17:36.838 { 00:17:36.838 "name": "BaseBdev1", 00:17:36.838 "uuid": "17634edf-f310-4b7a-9a84-e6a4f504eddd", 00:17:36.838 "is_configured": true, 00:17:36.838 "data_offset": 2048, 00:17:36.838 "data_size": 63488 00:17:36.838 }, 00:17:36.838 { 00:17:36.838 "name": null, 00:17:36.838 "uuid": "5ddbe8bc-c58a-431f-bde5-a7f40563cea6", 00:17:36.838 "is_configured": false, 00:17:36.838 "data_offset": 0, 00:17:36.838 "data_size": 63488 00:17:36.838 }, 00:17:36.838 { 00:17:36.838 "name": "BaseBdev3", 00:17:36.838 "uuid": "a55ef06b-bbc8-4a52-b6f0-5985200d06aa", 00:17:36.838 
"is_configured": true, 00:17:36.838 "data_offset": 2048, 00:17:36.838 "data_size": 63488 00:17:36.838 }, 00:17:36.838 { 00:17:36.838 "name": "BaseBdev4", 00:17:36.838 "uuid": "f3380e2e-2092-4c84-ae0a-0ebb81f9fa1f", 00:17:36.838 "is_configured": true, 00:17:36.838 "data_offset": 2048, 00:17:36.838 "data_size": 63488 00:17:36.838 } 00:17:36.838 ] 00:17:36.838 }' 00:17:36.838 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.838 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.098 [2024-12-06 16:33:18.884892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.098 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.358 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.358 "name": "Existed_Raid", 00:17:37.358 "uuid": "3ac6597a-28bc-4bb9-89eb-504c4dd3e935", 00:17:37.358 "strip_size_kb": 64, 00:17:37.358 "state": "configuring", 00:17:37.358 "raid_level": "raid5f", 00:17:37.358 
"superblock": true, 00:17:37.358 "num_base_bdevs": 4, 00:17:37.358 "num_base_bdevs_discovered": 2, 00:17:37.358 "num_base_bdevs_operational": 4, 00:17:37.358 "base_bdevs_list": [ 00:17:37.358 { 00:17:37.358 "name": null, 00:17:37.358 "uuid": "17634edf-f310-4b7a-9a84-e6a4f504eddd", 00:17:37.358 "is_configured": false, 00:17:37.358 "data_offset": 0, 00:17:37.358 "data_size": 63488 00:17:37.358 }, 00:17:37.358 { 00:17:37.358 "name": null, 00:17:37.358 "uuid": "5ddbe8bc-c58a-431f-bde5-a7f40563cea6", 00:17:37.358 "is_configured": false, 00:17:37.358 "data_offset": 0, 00:17:37.358 "data_size": 63488 00:17:37.358 }, 00:17:37.358 { 00:17:37.358 "name": "BaseBdev3", 00:17:37.358 "uuid": "a55ef06b-bbc8-4a52-b6f0-5985200d06aa", 00:17:37.358 "is_configured": true, 00:17:37.358 "data_offset": 2048, 00:17:37.358 "data_size": 63488 00:17:37.358 }, 00:17:37.358 { 00:17:37.358 "name": "BaseBdev4", 00:17:37.358 "uuid": "f3380e2e-2092-4c84-ae0a-0ebb81f9fa1f", 00:17:37.358 "is_configured": true, 00:17:37.358 "data_offset": 2048, 00:17:37.358 "data_size": 63488 00:17:37.358 } 00:17:37.358 ] 00:17:37.358 }' 00:17:37.358 16:33:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.358 16:33:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.632 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.632 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:37.632 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.632 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.632 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.632 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:17:37.632 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:37.632 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.632 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.633 [2024-12-06 16:33:19.390584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.633 16:33:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.633 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.633 "name": "Existed_Raid", 00:17:37.633 "uuid": "3ac6597a-28bc-4bb9-89eb-504c4dd3e935", 00:17:37.633 "strip_size_kb": 64, 00:17:37.633 "state": "configuring", 00:17:37.633 "raid_level": "raid5f", 00:17:37.633 "superblock": true, 00:17:37.633 "num_base_bdevs": 4, 00:17:37.633 "num_base_bdevs_discovered": 3, 00:17:37.633 "num_base_bdevs_operational": 4, 00:17:37.633 "base_bdevs_list": [ 00:17:37.633 { 00:17:37.633 "name": null, 00:17:37.633 "uuid": "17634edf-f310-4b7a-9a84-e6a4f504eddd", 00:17:37.633 "is_configured": false, 00:17:37.633 "data_offset": 0, 00:17:37.633 "data_size": 63488 00:17:37.633 }, 00:17:37.633 { 00:17:37.633 "name": "BaseBdev2", 00:17:37.633 "uuid": "5ddbe8bc-c58a-431f-bde5-a7f40563cea6", 00:17:37.633 "is_configured": true, 00:17:37.634 "data_offset": 2048, 00:17:37.634 "data_size": 63488 00:17:37.634 }, 00:17:37.634 { 00:17:37.634 "name": "BaseBdev3", 00:17:37.634 "uuid": "a55ef06b-bbc8-4a52-b6f0-5985200d06aa", 00:17:37.634 "is_configured": true, 00:17:37.634 "data_offset": 2048, 00:17:37.634 "data_size": 63488 00:17:37.634 }, 00:17:37.634 { 00:17:37.634 "name": "BaseBdev4", 00:17:37.634 "uuid": "f3380e2e-2092-4c84-ae0a-0ebb81f9fa1f", 00:17:37.634 "is_configured": true, 00:17:37.634 "data_offset": 2048, 00:17:37.634 "data_size": 63488 00:17:37.634 } 00:17:37.634 ] 00:17:37.634 }' 00:17:37.634 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:17:37.634 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 17634edf-f310-4b7a-9a84-e6a4f504eddd 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.209 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.209 [2024-12-06 16:33:19.956695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:38.209 NewBaseBdev 00:17:38.209 [2024-12-06 16:33:19.956944] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:17:38.209 [2024-12-06 16:33:19.956962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:38.209 [2024-12-06 16:33:19.957253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:38.210 [2024-12-06 16:33:19.957700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:17:38.210 [2024-12-06 16:33:19.957721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:17:38.210 [2024-12-06 16:33:19.957819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.210 [ 00:17:38.210 { 00:17:38.210 "name": "NewBaseBdev", 00:17:38.210 "aliases": [ 00:17:38.210 "17634edf-f310-4b7a-9a84-e6a4f504eddd" 00:17:38.210 ], 00:17:38.210 "product_name": "Malloc disk", 00:17:38.210 "block_size": 512, 00:17:38.210 "num_blocks": 65536, 00:17:38.210 "uuid": "17634edf-f310-4b7a-9a84-e6a4f504eddd", 00:17:38.210 "assigned_rate_limits": { 00:17:38.210 "rw_ios_per_sec": 0, 00:17:38.210 "rw_mbytes_per_sec": 0, 00:17:38.210 "r_mbytes_per_sec": 0, 00:17:38.210 "w_mbytes_per_sec": 0 00:17:38.210 }, 00:17:38.210 "claimed": true, 00:17:38.210 "claim_type": "exclusive_write", 00:17:38.210 "zoned": false, 00:17:38.210 "supported_io_types": { 00:17:38.210 "read": true, 00:17:38.210 "write": true, 00:17:38.210 "unmap": true, 00:17:38.210 "flush": true, 00:17:38.210 "reset": true, 00:17:38.210 "nvme_admin": false, 00:17:38.210 "nvme_io": false, 00:17:38.210 "nvme_io_md": false, 00:17:38.210 "write_zeroes": true, 00:17:38.210 "zcopy": true, 00:17:38.210 "get_zone_info": false, 00:17:38.210 "zone_management": false, 00:17:38.210 "zone_append": false, 00:17:38.210 "compare": false, 00:17:38.210 "compare_and_write": false, 00:17:38.210 "abort": true, 00:17:38.210 "seek_hole": false, 00:17:38.210 "seek_data": false, 00:17:38.210 "copy": true, 00:17:38.210 "nvme_iov_md": false 00:17:38.210 }, 00:17:38.210 "memory_domains": [ 00:17:38.210 { 00:17:38.210 "dma_device_id": "system", 00:17:38.210 "dma_device_type": 1 00:17:38.210 }, 00:17:38.210 { 00:17:38.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.210 "dma_device_type": 2 00:17:38.210 } 00:17:38.210 ], 00:17:38.210 
"driver_specific": {} 00:17:38.210 } 00:17:38.210 ] 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.210 16:33:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.210 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.210 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.210 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.210 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.210 16:33:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.470 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.470 "name": "Existed_Raid", 00:17:38.470 "uuid": "3ac6597a-28bc-4bb9-89eb-504c4dd3e935", 00:17:38.470 "strip_size_kb": 64, 00:17:38.470 "state": "online", 00:17:38.470 "raid_level": "raid5f", 00:17:38.470 "superblock": true, 00:17:38.470 "num_base_bdevs": 4, 00:17:38.470 "num_base_bdevs_discovered": 4, 00:17:38.470 "num_base_bdevs_operational": 4, 00:17:38.470 "base_bdevs_list": [ 00:17:38.470 { 00:17:38.470 "name": "NewBaseBdev", 00:17:38.470 "uuid": "17634edf-f310-4b7a-9a84-e6a4f504eddd", 00:17:38.470 "is_configured": true, 00:17:38.470 "data_offset": 2048, 00:17:38.470 "data_size": 63488 00:17:38.470 }, 00:17:38.470 { 00:17:38.470 "name": "BaseBdev2", 00:17:38.470 "uuid": "5ddbe8bc-c58a-431f-bde5-a7f40563cea6", 00:17:38.470 "is_configured": true, 00:17:38.470 "data_offset": 2048, 00:17:38.470 "data_size": 63488 00:17:38.470 }, 00:17:38.470 { 00:17:38.470 "name": "BaseBdev3", 00:17:38.470 "uuid": "a55ef06b-bbc8-4a52-b6f0-5985200d06aa", 00:17:38.470 "is_configured": true, 00:17:38.470 "data_offset": 2048, 00:17:38.470 "data_size": 63488 00:17:38.470 }, 00:17:38.470 { 00:17:38.470 "name": "BaseBdev4", 00:17:38.470 "uuid": "f3380e2e-2092-4c84-ae0a-0ebb81f9fa1f", 00:17:38.470 "is_configured": true, 00:17:38.470 "data_offset": 2048, 00:17:38.470 "data_size": 63488 00:17:38.470 } 00:17:38.470 ] 00:17:38.470 }' 00:17:38.470 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.470 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.729 [2024-12-06 16:33:20.464163] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:38.729 "name": "Existed_Raid", 00:17:38.729 "aliases": [ 00:17:38.729 "3ac6597a-28bc-4bb9-89eb-504c4dd3e935" 00:17:38.729 ], 00:17:38.729 "product_name": "Raid Volume", 00:17:38.729 "block_size": 512, 00:17:38.729 "num_blocks": 190464, 00:17:38.729 "uuid": "3ac6597a-28bc-4bb9-89eb-504c4dd3e935", 00:17:38.729 "assigned_rate_limits": { 00:17:38.729 "rw_ios_per_sec": 0, 00:17:38.729 "rw_mbytes_per_sec": 0, 00:17:38.729 "r_mbytes_per_sec": 0, 00:17:38.729 "w_mbytes_per_sec": 0 00:17:38.729 }, 00:17:38.729 "claimed": false, 00:17:38.729 "zoned": false, 00:17:38.729 "supported_io_types": { 00:17:38.729 "read": true, 00:17:38.729 "write": true, 00:17:38.729 "unmap": false, 00:17:38.729 "flush": false, 
00:17:38.729 "reset": true, 00:17:38.729 "nvme_admin": false, 00:17:38.729 "nvme_io": false, 00:17:38.729 "nvme_io_md": false, 00:17:38.729 "write_zeroes": true, 00:17:38.729 "zcopy": false, 00:17:38.729 "get_zone_info": false, 00:17:38.729 "zone_management": false, 00:17:38.729 "zone_append": false, 00:17:38.729 "compare": false, 00:17:38.729 "compare_and_write": false, 00:17:38.729 "abort": false, 00:17:38.729 "seek_hole": false, 00:17:38.729 "seek_data": false, 00:17:38.729 "copy": false, 00:17:38.729 "nvme_iov_md": false 00:17:38.729 }, 00:17:38.729 "driver_specific": { 00:17:38.729 "raid": { 00:17:38.729 "uuid": "3ac6597a-28bc-4bb9-89eb-504c4dd3e935", 00:17:38.729 "strip_size_kb": 64, 00:17:38.729 "state": "online", 00:17:38.729 "raid_level": "raid5f", 00:17:38.729 "superblock": true, 00:17:38.729 "num_base_bdevs": 4, 00:17:38.729 "num_base_bdevs_discovered": 4, 00:17:38.729 "num_base_bdevs_operational": 4, 00:17:38.729 "base_bdevs_list": [ 00:17:38.729 { 00:17:38.729 "name": "NewBaseBdev", 00:17:38.729 "uuid": "17634edf-f310-4b7a-9a84-e6a4f504eddd", 00:17:38.729 "is_configured": true, 00:17:38.729 "data_offset": 2048, 00:17:38.729 "data_size": 63488 00:17:38.729 }, 00:17:38.729 { 00:17:38.729 "name": "BaseBdev2", 00:17:38.729 "uuid": "5ddbe8bc-c58a-431f-bde5-a7f40563cea6", 00:17:38.729 "is_configured": true, 00:17:38.729 "data_offset": 2048, 00:17:38.729 "data_size": 63488 00:17:38.729 }, 00:17:38.729 { 00:17:38.729 "name": "BaseBdev3", 00:17:38.729 "uuid": "a55ef06b-bbc8-4a52-b6f0-5985200d06aa", 00:17:38.729 "is_configured": true, 00:17:38.729 "data_offset": 2048, 00:17:38.729 "data_size": 63488 00:17:38.729 }, 00:17:38.729 { 00:17:38.729 "name": "BaseBdev4", 00:17:38.729 "uuid": "f3380e2e-2092-4c84-ae0a-0ebb81f9fa1f", 00:17:38.729 "is_configured": true, 00:17:38.729 "data_offset": 2048, 00:17:38.729 "data_size": 63488 00:17:38.729 } 00:17:38.729 ] 00:17:38.729 } 00:17:38.729 } 00:17:38.729 }' 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:38.729 BaseBdev2 00:17:38.729 BaseBdev3 00:17:38.729 BaseBdev4' 00:17:38.729 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.988 
16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.988 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.989 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.989 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.989 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:38.989 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.989 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 [2024-12-06 16:33:20.787355] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.989 [2024-12-06 16:33:20.787384] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.989 [2024-12-06 16:33:20.787457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.989 [2024-12-06 16:33:20.787711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.989 [2024-12-06 16:33:20.787721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:17:38.989 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.989 16:33:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 94390 00:17:38.989 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 94390 ']' 00:17:38.989 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 94390 
00:17:38.989 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:38.989 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.989 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94390 00:17:39.247 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.247 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.247 killing process with pid 94390 00:17:39.247 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94390' 00:17:39.247 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 94390 00:17:39.247 [2024-12-06 16:33:20.835432] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.247 16:33:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 94390 00:17:39.247 [2024-12-06 16:33:20.876088] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:39.554 16:33:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:39.554 ************************************ 00:17:39.554 END TEST raid5f_state_function_test_sb 00:17:39.554 ************************************ 00:17:39.554 00:17:39.554 real 0m9.826s 00:17:39.554 user 0m16.820s 00:17:39.554 sys 0m2.077s 00:17:39.554 16:33:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.554 16:33:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.554 16:33:21 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:39.554 16:33:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:17:39.554 16:33:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.554 16:33:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:39.554 ************************************ 00:17:39.554 START TEST raid5f_superblock_test 00:17:39.554 ************************************ 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=95038 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 95038 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 95038 ']' 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.554 16:33:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.554 [2024-12-06 16:33:21.258627] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:17:39.554 [2024-12-06 16:33:21.258845] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95038 ] 00:17:39.813 [2024-12-06 16:33:21.408052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.813 [2024-12-06 16:33:21.433597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.813 [2024-12-06 16:33:21.476457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.813 [2024-12-06 16:33:21.476574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.382 malloc1 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.382 [2024-12-06 16:33:22.108240] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:40.382 [2024-12-06 16:33:22.108312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.382 [2024-12-06 16:33:22.108338] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:40.382 [2024-12-06 16:33:22.108352] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.382 [2024-12-06 16:33:22.110519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.382 [2024-12-06 16:33:22.110599] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:40.382 pt1 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.382 malloc2 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.382 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.382 [2024-12-06 16:33:22.136694] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:40.383 [2024-12-06 16:33:22.136784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.383 [2024-12-06 16:33:22.136836] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:40.383 [2024-12-06 16:33:22.136868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.383 [2024-12-06 16:33:22.138956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.383 [2024-12-06 16:33:22.139026] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:40.383 pt2 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.383 malloc3 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.383 [2024-12-06 16:33:22.169680] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:17:40.383 [2024-12-06 16:33:22.169771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:40.383 [2024-12-06 16:33:22.169808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:17:40.383 [2024-12-06 16:33:22.169839] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:40.383 [2024-12-06 16:33:22.172070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:40.383 [2024-12-06 16:33:22.172148] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:17:40.383 pt3
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:40.383 malloc4
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.383 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:40.383 [2024-12-06 16:33:22.213402] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:17:40.383 [2024-12-06 16:33:22.213461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:40.383 [2024-12-06 16:33:22.213482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:17:40.383 [2024-12-06 16:33:22.213497] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:40.383 [2024-12-06 16:33:22.215929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:40.383 [2024-12-06 16:33:22.215970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:17:40.642 pt4
00:17:40.642 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.642 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:40.642 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:40.642 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:17:40.642 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.642 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:40.642 [2024-12-06 16:33:22.225434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:40.642 [2024-12-06 16:33:22.227337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:40.643 [2024-12-06 16:33:22.227425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:17:40.643 [2024-12-06 16:33:22.227480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:17:40.643 [2024-12-06 16:33:22.227661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:17:40.643 [2024-12-06 16:33:22.227683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:17:40.643 [2024-12-06 16:33:22.227974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:17:40.643 [2024-12-06 16:33:22.228548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:17:40.643 [2024-12-06 16:33:22.228563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:17:40.643 [2024-12-06 16:33:22.228699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:40.643 "name": "raid_bdev1",
00:17:40.643 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d",
00:17:40.643 "strip_size_kb": 64,
00:17:40.643 "state": "online",
00:17:40.643 "raid_level": "raid5f",
00:17:40.643 "superblock": true,
00:17:40.643 "num_base_bdevs": 4,
00:17:40.643 "num_base_bdevs_discovered": 4,
00:17:40.643 "num_base_bdevs_operational": 4,
00:17:40.643 "base_bdevs_list": [
00:17:40.643 {
00:17:40.643 "name": "pt1",
00:17:40.643 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:40.643 "is_configured": true,
00:17:40.643 "data_offset": 2048,
00:17:40.643 "data_size": 63488
00:17:40.643 },
00:17:40.643 {
00:17:40.643 "name": "pt2",
00:17:40.643 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:40.643 "is_configured": true,
00:17:40.643 "data_offset": 2048,
00:17:40.643 "data_size": 63488
00:17:40.643 },
00:17:40.643 {
00:17:40.643 "name": "pt3",
00:17:40.643 "uuid": "00000000-0000-0000-0000-000000000003",
00:17:40.643 "is_configured": true,
00:17:40.643 "data_offset": 2048,
00:17:40.643 "data_size": 63488
00:17:40.643 },
00:17:40.643 {
00:17:40.643 "name": "pt4",
00:17:40.643 "uuid": "00000000-0000-0000-0000-000000000004",
00:17:40.643 "is_configured": true,
00:17:40.643 "data_offset": 2048,
00:17:40.643 "data_size": 63488
00:17:40.643 }
00:17:40.643 ]
00:17:40.643 }'
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:40.643 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:40.902 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:17:40.902 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:40.902 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:40.902 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:40.902 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:17:40.902 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:40.902 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:40.902 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:40.902 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.902 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:40.902 [2024-12-06 16:33:22.666025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:40.902 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.902 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:40.902 "name": "raid_bdev1",
00:17:40.902 "aliases": [
00:17:40.902 "49406e8f-6572-46da-8e6b-9cc3b744e65d"
00:17:40.902 ],
00:17:40.902 "product_name": "Raid Volume",
00:17:40.902 "block_size": 512,
00:17:40.902 "num_blocks": 190464,
00:17:40.902 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d",
00:17:40.902 "assigned_rate_limits": {
00:17:40.902 "rw_ios_per_sec": 0,
00:17:40.902 "rw_mbytes_per_sec": 0,
00:17:40.902 "r_mbytes_per_sec": 0,
00:17:40.902 "w_mbytes_per_sec": 0
00:17:40.902 },
00:17:40.902 "claimed": false,
00:17:40.902 "zoned": false,
00:17:40.902 "supported_io_types": {
00:17:40.902 "read": true,
00:17:40.902 "write": true,
00:17:40.902 "unmap": false,
00:17:40.902 "flush": false,
00:17:40.902 "reset": true,
00:17:40.902 "nvme_admin": false,
00:17:40.902 "nvme_io": false,
00:17:40.902 "nvme_io_md": false,
00:17:40.902 "write_zeroes": true,
00:17:40.902 "zcopy": false,
00:17:40.902 "get_zone_info": false,
00:17:40.902 "zone_management": false,
00:17:40.902 "zone_append": false,
00:17:40.902 "compare": false,
00:17:40.902 "compare_and_write": false,
00:17:40.902 "abort": false,
00:17:40.902 "seek_hole": false,
00:17:40.902 "seek_data": false,
00:17:40.902 "copy": false,
00:17:40.902 "nvme_iov_md": false
00:17:40.902 },
00:17:40.902 "driver_specific": {
00:17:40.902 "raid": {
00:17:40.902 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d",
00:17:40.902 "strip_size_kb": 64,
00:17:40.902 "state": "online",
00:17:40.902 "raid_level": "raid5f",
00:17:40.902 "superblock": true,
00:17:40.902 "num_base_bdevs": 4,
00:17:40.902 "num_base_bdevs_discovered": 4,
00:17:40.902 "num_base_bdevs_operational": 4,
00:17:40.902 "base_bdevs_list": [
00:17:40.902 {
00:17:40.902 "name": "pt1",
00:17:40.902 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:40.902 "is_configured": true,
00:17:40.902 "data_offset": 2048,
00:17:40.902 "data_size": 63488
00:17:40.902 },
00:17:40.902 {
00:17:40.902 "name": "pt2",
00:17:40.902 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:40.902 "is_configured": true,
00:17:40.902 "data_offset": 2048,
00:17:40.902 "data_size": 63488
00:17:40.902 },
00:17:40.903 {
00:17:40.903 "name": "pt3",
00:17:40.903 "uuid": "00000000-0000-0000-0000-000000000003",
00:17:40.903 "is_configured": true,
00:17:40.903 "data_offset": 2048,
00:17:40.903 "data_size": 63488
00:17:40.903 },
00:17:40.903 {
00:17:40.903 "name": "pt4",
00:17:40.903 "uuid": "00000000-0000-0000-0000-000000000004",
00:17:40.903 "is_configured": true,
00:17:40.903 "data_offset": 2048,
00:17:40.903 "data_size": 63488
00:17:40.903 }
00:17:40.903 ]
00:17:40.903 }
00:17:40.903 }
00:17:40.903 }'
00:17:40.903 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:41.162 pt2
00:17:41.162 pt3
00:17:41.162 pt4'
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:41.162 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:17:41.163 [2024-12-06 16:33:22.981493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:41.163 16:33:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=49406e8f-6572-46da-8e6b-9cc3b744e65d
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 49406e8f-6572-46da-8e6b-9cc3b744e65d ']'
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.423 [2024-12-06 16:33:23.033245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:41.423 [2024-12-06 16:33:23.033293] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:41.423 [2024-12-06 16:33:23.033384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:41.423 [2024-12-06 16:33:23.033482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:41.423 [2024-12-06 16:33:23.033493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:17:41.423 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.424 [2024-12-06 16:33:23.197025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:17:41.424 [2024-12-06 16:33:23.199067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:17:41.424 [2024-12-06 16:33:23.199177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:17:41.424 [2024-12-06 16:33:23.199243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:17:41.424 [2024-12-06 16:33:23.199323] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:17:41.424 [2024-12-06 16:33:23.199424] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:17:41.424 [2024-12-06 16:33:23.199470] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:17:41.424 [2024-12-06 16:33:23.199487] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:17:41.424 [2024-12-06 16:33:23.199501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:41.424 [2024-12-06 16:33:23.199514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:17:41.424 request:
00:17:41.424 {
00:17:41.424 "name": "raid_bdev1",
00:17:41.424 "raid_level": "raid5f",
00:17:41.424 "base_bdevs": [
00:17:41.424 "malloc1",
00:17:41.424 "malloc2",
00:17:41.424 "malloc3",
00:17:41.424 "malloc4"
00:17:41.424 ],
00:17:41.424 "strip_size_kb": 64,
00:17:41.424 "superblock": false,
00:17:41.424 "method": "bdev_raid_create",
00:17:41.424 "req_id": 1
00:17:41.424 }
00:17:41.424 Got JSON-RPC error response
00:17:41.424 response:
00:17:41.424 {
00:17:41.424 "code": -17,
00:17:41.424 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:17:41.424 }
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.424 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.684 [2024-12-06 16:33:23.264817] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:41.684 [2024-12-06 16:33:23.264927] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:41.684 [2024-12-06 16:33:23.264972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:17:41.684 [2024-12-06 16:33:23.265017] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:41.684 [2024-12-06 16:33:23.267432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:41.684 [2024-12-06 16:33:23.267503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:41.684 [2024-12-06 16:33:23.267641] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:41.684 [2024-12-06 16:33:23.267717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:41.684 pt1
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:41.684 "name": "raid_bdev1",
00:17:41.684 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d",
00:17:41.684 "strip_size_kb": 64,
00:17:41.684 "state": "configuring",
00:17:41.684 "raid_level": "raid5f",
00:17:41.684 "superblock": true,
00:17:41.684 "num_base_bdevs": 4,
00:17:41.684 "num_base_bdevs_discovered": 1,
00:17:41.684 "num_base_bdevs_operational": 4,
00:17:41.684 "base_bdevs_list": [
00:17:41.684 {
00:17:41.684 "name": "pt1",
00:17:41.684 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:41.684 "is_configured": true,
00:17:41.684 "data_offset": 2048,
00:17:41.684 "data_size": 63488
00:17:41.684 },
00:17:41.684 {
00:17:41.684 "name": null,
00:17:41.684 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:41.684 "is_configured": false,
00:17:41.684 "data_offset": 2048,
00:17:41.684 "data_size": 63488
00:17:41.684 },
00:17:41.684 {
00:17:41.684 "name": null,
00:17:41.684 "uuid": "00000000-0000-0000-0000-000000000003",
00:17:41.684 "is_configured": false,
00:17:41.684 "data_offset": 2048,
00:17:41.684 "data_size": 63488
00:17:41.684 },
00:17:41.684 {
00:17:41.684 "name": null,
00:17:41.684 "uuid": "00000000-0000-0000-0000-000000000004",
00:17:41.684 "is_configured": false,
00:17:41.684 "data_offset": 2048,
00:17:41.684 "data_size": 63488
00:17:41.684 }
00:17:41.684 ]
00:17:41.684 }'
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:41.684 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.944 [2024-12-06 16:33:23.708175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:41.944 [2024-12-06 16:33:23.708245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:41.944 [2024-12-06 16:33:23.708271] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:17:41.944 [2024-12-06 16:33:23.708280] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:41.944 [2024-12-06 16:33:23.708684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:41.944 [2024-12-06 16:33:23.708707] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:41.944 [2024-12-06 16:33:23.708785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:41.944 [2024-12-06 16:33:23.708807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:41.944 pt2
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.944 [2024-12-06 16:33:23.720163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:41.944 "name": "raid_bdev1",
00:17:41.944 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d",
00:17:41.944 "strip_size_kb": 64,
00:17:41.944 "state": "configuring",
00:17:41.944 "raid_level": "raid5f",
00:17:41.944 "superblock": true,
00:17:41.944 "num_base_bdevs": 4,
00:17:41.944 "num_base_bdevs_discovered": 1,
00:17:41.944 "num_base_bdevs_operational": 4,
00:17:41.944 "base_bdevs_list": [
00:17:41.944 {
00:17:41.944 "name": "pt1",
00:17:41.944 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:41.944 "is_configured": true,
00:17:41.944 "data_offset": 2048,
00:17:41.944 "data_size": 63488
00:17:41.944 },
00:17:41.944 {
00:17:41.944 "name": null,
00:17:41.944 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:41.944 "is_configured": false,
00:17:41.944 "data_offset": 0,
00:17:41.944 "data_size": 63488
00:17:41.944 },
00:17:41.944 {
00:17:41.944 "name": null,
00:17:41.944 "uuid": "00000000-0000-0000-0000-000000000003",
00:17:41.944 "is_configured": false,
00:17:41.944 "data_offset": 2048,
00:17:41.944 "data_size": 63488
00:17:41.944 },
00:17:41.944 {
00:17:41.944 "name": null,
00:17:41.944 "uuid": "00000000-0000-0000-0000-000000000004",
00:17:41.944 "is_configured": false,
00:17:41.944 "data_offset": 2048,
00:17:41.944 "data_size": 63488
00:17:41.944 }
00:17:41.944 ]
00:17:41.944 }'
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:41.944 16:33:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:42.513 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:42.514 [2024-12-06 16:33:24.127559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:42.514 [2024-12-06 16:33:24.127677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:42.514 [2024-12-06 16:33:24.127714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:17:42.514 [2024-12-06 16:33:24.127744] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:42.514 [2024-12-06 16:33:24.128243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:42.514 [2024-12-06 16:33:24.128310] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:42.514 [2024-12-06 16:33:24.128422] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:42.514 [2024-12-06 16:33:24.128480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:42.514 pt2
00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:42.514 [2024-12-06 16:33:24.139515] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:17:42.514 [2024-12-06 16:33:24.139599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.514 [2024-12-06 16:33:24.139632] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:42.514 [2024-12-06 16:33:24.139661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.514 [2024-12-06 16:33:24.140057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.514 [2024-12-06 16:33:24.140119] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:42.514 [2024-12-06 16:33:24.140221] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:42.514 [2024-12-06 16:33:24.140287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:42.514 pt3 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.514 [2024-12-06 16:33:24.151484] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:42.514 [2024-12-06 16:33:24.151571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.514 [2024-12-06 16:33:24.151619] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:42.514 [2024-12-06 16:33:24.151648] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.514 [2024-12-06 16:33:24.152011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.514 [2024-12-06 16:33:24.152069] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:42.514 [2024-12-06 16:33:24.152137] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:42.514 [2024-12-06 16:33:24.152161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:42.514 [2024-12-06 16:33:24.152298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:17:42.514 [2024-12-06 16:33:24.152314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:42.514 [2024-12-06 16:33:24.152543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:42.514 [2024-12-06 16:33:24.153027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:17:42.514 [2024-12-06 16:33:24.153044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:17:42.514 [2024-12-06 16:33:24.153150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.514 pt4 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.514 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.514 "name": "raid_bdev1", 00:17:42.514 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d", 00:17:42.514 "strip_size_kb": 64, 00:17:42.514 "state": "online", 00:17:42.514 "raid_level": "raid5f", 00:17:42.514 "superblock": true, 00:17:42.514 "num_base_bdevs": 4, 00:17:42.514 "num_base_bdevs_discovered": 4, 00:17:42.514 "num_base_bdevs_operational": 4, 00:17:42.514 "base_bdevs_list": [ 00:17:42.514 { 00:17:42.514 "name": "pt1", 00:17:42.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.514 "is_configured": true, 00:17:42.515 
"data_offset": 2048, 00:17:42.515 "data_size": 63488 00:17:42.515 }, 00:17:42.515 { 00:17:42.515 "name": "pt2", 00:17:42.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.515 "is_configured": true, 00:17:42.515 "data_offset": 2048, 00:17:42.515 "data_size": 63488 00:17:42.515 }, 00:17:42.515 { 00:17:42.515 "name": "pt3", 00:17:42.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:42.515 "is_configured": true, 00:17:42.515 "data_offset": 2048, 00:17:42.515 "data_size": 63488 00:17:42.515 }, 00:17:42.515 { 00:17:42.515 "name": "pt4", 00:17:42.515 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:42.515 "is_configured": true, 00:17:42.515 "data_offset": 2048, 00:17:42.515 "data_size": 63488 00:17:42.515 } 00:17:42.515 ] 00:17:42.515 }' 00:17:42.515 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.515 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.083 16:33:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.083 [2024-12-06 16:33:24.658881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:43.083 "name": "raid_bdev1", 00:17:43.083 "aliases": [ 00:17:43.083 "49406e8f-6572-46da-8e6b-9cc3b744e65d" 00:17:43.083 ], 00:17:43.083 "product_name": "Raid Volume", 00:17:43.083 "block_size": 512, 00:17:43.083 "num_blocks": 190464, 00:17:43.083 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d", 00:17:43.083 "assigned_rate_limits": { 00:17:43.083 "rw_ios_per_sec": 0, 00:17:43.083 "rw_mbytes_per_sec": 0, 00:17:43.083 "r_mbytes_per_sec": 0, 00:17:43.083 "w_mbytes_per_sec": 0 00:17:43.083 }, 00:17:43.083 "claimed": false, 00:17:43.083 "zoned": false, 00:17:43.083 "supported_io_types": { 00:17:43.083 "read": true, 00:17:43.083 "write": true, 00:17:43.083 "unmap": false, 00:17:43.083 "flush": false, 00:17:43.083 "reset": true, 00:17:43.083 "nvme_admin": false, 00:17:43.083 "nvme_io": false, 00:17:43.083 "nvme_io_md": false, 00:17:43.083 "write_zeroes": true, 00:17:43.083 "zcopy": false, 00:17:43.083 "get_zone_info": false, 00:17:43.083 "zone_management": false, 00:17:43.083 "zone_append": false, 00:17:43.083 "compare": false, 00:17:43.083 "compare_and_write": false, 00:17:43.083 "abort": false, 00:17:43.083 "seek_hole": false, 00:17:43.083 "seek_data": false, 00:17:43.083 "copy": false, 00:17:43.083 "nvme_iov_md": false 00:17:43.083 }, 00:17:43.083 "driver_specific": { 00:17:43.083 "raid": { 00:17:43.083 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d", 00:17:43.083 "strip_size_kb": 64, 00:17:43.083 "state": "online", 00:17:43.083 "raid_level": "raid5f", 00:17:43.083 "superblock": true, 00:17:43.083 "num_base_bdevs": 4, 00:17:43.083 "num_base_bdevs_discovered": 4, 
00:17:43.083 "num_base_bdevs_operational": 4, 00:17:43.083 "base_bdevs_list": [ 00:17:43.083 { 00:17:43.083 "name": "pt1", 00:17:43.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.083 "is_configured": true, 00:17:43.083 "data_offset": 2048, 00:17:43.083 "data_size": 63488 00:17:43.083 }, 00:17:43.083 { 00:17:43.083 "name": "pt2", 00:17:43.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.083 "is_configured": true, 00:17:43.083 "data_offset": 2048, 00:17:43.083 "data_size": 63488 00:17:43.083 }, 00:17:43.083 { 00:17:43.083 "name": "pt3", 00:17:43.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:43.083 "is_configured": true, 00:17:43.083 "data_offset": 2048, 00:17:43.083 "data_size": 63488 00:17:43.083 }, 00:17:43.083 { 00:17:43.083 "name": "pt4", 00:17:43.083 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:43.083 "is_configured": true, 00:17:43.083 "data_offset": 2048, 00:17:43.083 "data_size": 63488 00:17:43.083 } 00:17:43.083 ] 00:17:43.083 } 00:17:43.083 } 00:17:43.083 }' 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:43.083 pt2 00:17:43.083 pt3 00:17:43.083 pt4' 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.083 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.343 16:33:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.343 [2024-12-06 16:33:24.986335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.343 16:33:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 49406e8f-6572-46da-8e6b-9cc3b744e65d '!=' 49406e8f-6572-46da-8e6b-9cc3b744e65d ']' 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.343 [2024-12-06 16:33:25.034020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.343 "name": "raid_bdev1", 00:17:43.343 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d", 00:17:43.343 "strip_size_kb": 64, 00:17:43.343 "state": "online", 00:17:43.343 "raid_level": "raid5f", 00:17:43.343 "superblock": true, 00:17:43.343 "num_base_bdevs": 4, 00:17:43.343 "num_base_bdevs_discovered": 3, 00:17:43.343 "num_base_bdevs_operational": 3, 00:17:43.343 "base_bdevs_list": [ 00:17:43.343 { 00:17:43.343 "name": null, 00:17:43.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.343 "is_configured": false, 00:17:43.343 "data_offset": 0, 00:17:43.343 "data_size": 63488 00:17:43.343 }, 00:17:43.343 { 00:17:43.343 "name": "pt2", 00:17:43.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.343 "is_configured": true, 00:17:43.343 "data_offset": 2048, 00:17:43.343 "data_size": 63488 00:17:43.343 }, 00:17:43.343 { 00:17:43.343 "name": "pt3", 00:17:43.343 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:43.343 "is_configured": true, 00:17:43.343 "data_offset": 2048, 00:17:43.343 "data_size": 63488 00:17:43.343 }, 00:17:43.343 { 00:17:43.343 "name": "pt4", 00:17:43.343 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:43.343 "is_configured": true, 00:17:43.343 
"data_offset": 2048, 00:17:43.343 "data_size": 63488 00:17:43.343 } 00:17:43.343 ] 00:17:43.343 }' 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.343 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.912 [2024-12-06 16:33:25.481316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.912 [2024-12-06 16:33:25.481406] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.912 [2024-12-06 16:33:25.481514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.912 [2024-12-06 16:33:25.481620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.912 [2024-12-06 16:33:25.481670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.912 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.912 [2024-12-06 16:33:25.581137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:43.912 [2024-12-06 16:33:25.581198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.912 [2024-12-06 16:33:25.581227] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:43.913 [2024-12-06 16:33:25.581239] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.913 [2024-12-06 16:33:25.583477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.913 [2024-12-06 16:33:25.583517] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:43.913 [2024-12-06 16:33:25.583599] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:43.913 [2024-12-06 16:33:25.583636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:43.913 pt2 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.913 "name": "raid_bdev1", 00:17:43.913 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d", 00:17:43.913 "strip_size_kb": 64, 00:17:43.913 "state": "configuring", 00:17:43.913 "raid_level": "raid5f", 00:17:43.913 "superblock": true, 00:17:43.913 
"num_base_bdevs": 4, 00:17:43.913 "num_base_bdevs_discovered": 1, 00:17:43.913 "num_base_bdevs_operational": 3, 00:17:43.913 "base_bdevs_list": [ 00:17:43.913 { 00:17:43.913 "name": null, 00:17:43.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.913 "is_configured": false, 00:17:43.913 "data_offset": 2048, 00:17:43.913 "data_size": 63488 00:17:43.913 }, 00:17:43.913 { 00:17:43.913 "name": "pt2", 00:17:43.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.913 "is_configured": true, 00:17:43.913 "data_offset": 2048, 00:17:43.913 "data_size": 63488 00:17:43.913 }, 00:17:43.913 { 00:17:43.913 "name": null, 00:17:43.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:43.913 "is_configured": false, 00:17:43.913 "data_offset": 2048, 00:17:43.913 "data_size": 63488 00:17:43.913 }, 00:17:43.913 { 00:17:43.913 "name": null, 00:17:43.913 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:43.913 "is_configured": false, 00:17:43.913 "data_offset": 2048, 00:17:43.913 "data_size": 63488 00:17:43.913 } 00:17:43.913 ] 00:17:43.913 }' 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.913 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.173 [2024-12-06 16:33:25.984438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:44.173 [2024-12-06 
16:33:25.984526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.173 [2024-12-06 16:33:25.984551] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:44.173 [2024-12-06 16:33:25.984565] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.173 [2024-12-06 16:33:25.984965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.173 [2024-12-06 16:33:25.984984] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:44.173 [2024-12-06 16:33:25.985065] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:44.173 [2024-12-06 16:33:25.985089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:44.173 pt3 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.173 16:33:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.432 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.432 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.432 "name": "raid_bdev1", 00:17:44.432 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d", 00:17:44.432 "strip_size_kb": 64, 00:17:44.432 "state": "configuring", 00:17:44.432 "raid_level": "raid5f", 00:17:44.432 "superblock": true, 00:17:44.432 "num_base_bdevs": 4, 00:17:44.432 "num_base_bdevs_discovered": 2, 00:17:44.432 "num_base_bdevs_operational": 3, 00:17:44.432 "base_bdevs_list": [ 00:17:44.432 { 00:17:44.432 "name": null, 00:17:44.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.432 "is_configured": false, 00:17:44.432 "data_offset": 2048, 00:17:44.432 "data_size": 63488 00:17:44.432 }, 00:17:44.432 { 00:17:44.432 "name": "pt2", 00:17:44.432 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.432 "is_configured": true, 00:17:44.432 "data_offset": 2048, 00:17:44.432 "data_size": 63488 00:17:44.432 }, 00:17:44.432 { 00:17:44.432 "name": "pt3", 00:17:44.432 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:44.432 "is_configured": true, 00:17:44.432 "data_offset": 2048, 00:17:44.432 "data_size": 63488 00:17:44.432 }, 00:17:44.432 { 00:17:44.432 "name": null, 00:17:44.432 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:44.432 "is_configured": false, 00:17:44.432 "data_offset": 2048, 
00:17:44.432 "data_size": 63488 00:17:44.432 } 00:17:44.432 ] 00:17:44.432 }' 00:17:44.432 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.432 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.691 [2024-12-06 16:33:26.423724] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:44.691 [2024-12-06 16:33:26.423787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.691 [2024-12-06 16:33:26.423809] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:44.691 [2024-12-06 16:33:26.423820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.691 [2024-12-06 16:33:26.424261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.691 [2024-12-06 16:33:26.424284] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:44.691 [2024-12-06 16:33:26.424361] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:44.691 [2024-12-06 16:33:26.424385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:44.691 [2024-12-06 16:33:26.424483] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:17:44.691 [2024-12-06 16:33:26.424494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:44.691 [2024-12-06 16:33:26.424739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:44.691 [2024-12-06 16:33:26.425291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:17:44.691 [2024-12-06 16:33:26.425325] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:17:44.691 [2024-12-06 16:33:26.425559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.691 pt4 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.691 
16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.691 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.691 "name": "raid_bdev1", 00:17:44.691 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d", 00:17:44.691 "strip_size_kb": 64, 00:17:44.691 "state": "online", 00:17:44.691 "raid_level": "raid5f", 00:17:44.691 "superblock": true, 00:17:44.691 "num_base_bdevs": 4, 00:17:44.691 "num_base_bdevs_discovered": 3, 00:17:44.691 "num_base_bdevs_operational": 3, 00:17:44.691 "base_bdevs_list": [ 00:17:44.691 { 00:17:44.691 "name": null, 00:17:44.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.691 "is_configured": false, 00:17:44.691 "data_offset": 2048, 00:17:44.691 "data_size": 63488 00:17:44.691 }, 00:17:44.691 { 00:17:44.691 "name": "pt2", 00:17:44.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.691 "is_configured": true, 00:17:44.691 "data_offset": 2048, 00:17:44.691 "data_size": 63488 00:17:44.691 }, 00:17:44.691 { 00:17:44.691 "name": "pt3", 00:17:44.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:44.691 "is_configured": true, 00:17:44.691 "data_offset": 2048, 00:17:44.691 "data_size": 63488 00:17:44.691 }, 00:17:44.691 { 00:17:44.691 "name": "pt4", 00:17:44.691 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:44.691 "is_configured": true, 00:17:44.691 "data_offset": 2048, 00:17:44.691 "data_size": 63488 00:17:44.691 } 00:17:44.691 ] 00:17:44.691 }' 00:17:44.691 16:33:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.692 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.260 [2024-12-06 16:33:26.839057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.260 [2024-12-06 16:33:26.839094] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.260 [2024-12-06 16:33:26.839172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.260 [2024-12-06 16:33:26.839272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.260 [2024-12-06 16:33:26.839297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.260 [2024-12-06 16:33:26.910933] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:45.260 [2024-12-06 16:33:26.911004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.260 [2024-12-06 16:33:26.911055] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:45.260 [2024-12-06 16:33:26.911065] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.260 [2024-12-06 16:33:26.913667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.260 [2024-12-06 16:33:26.913706] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:45.260 [2024-12-06 16:33:26.913791] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:45.260 [2024-12-06 16:33:26.913839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:45.260 
[2024-12-06 16:33:26.913965] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:45.260 [2024-12-06 16:33:26.913985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.260 [2024-12-06 16:33:26.914009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:17:45.260 [2024-12-06 16:33:26.914052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:45.260 [2024-12-06 16:33:26.914172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:45.260 pt1 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.260 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.261 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.261 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.261 "name": "raid_bdev1", 00:17:45.261 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d", 00:17:45.261 "strip_size_kb": 64, 00:17:45.261 "state": "configuring", 00:17:45.261 "raid_level": "raid5f", 00:17:45.261 "superblock": true, 00:17:45.261 "num_base_bdevs": 4, 00:17:45.261 "num_base_bdevs_discovered": 2, 00:17:45.261 "num_base_bdevs_operational": 3, 00:17:45.261 "base_bdevs_list": [ 00:17:45.261 { 00:17:45.261 "name": null, 00:17:45.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.261 "is_configured": false, 00:17:45.261 "data_offset": 2048, 00:17:45.261 "data_size": 63488 00:17:45.261 }, 00:17:45.261 { 00:17:45.261 "name": "pt2", 00:17:45.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.261 "is_configured": true, 00:17:45.261 "data_offset": 2048, 00:17:45.261 "data_size": 63488 00:17:45.261 }, 00:17:45.261 { 00:17:45.261 "name": "pt3", 00:17:45.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:45.261 "is_configured": true, 00:17:45.261 "data_offset": 2048, 00:17:45.261 "data_size": 63488 00:17:45.261 }, 00:17:45.261 { 00:17:45.261 "name": null, 00:17:45.261 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:45.261 "is_configured": false, 00:17:45.261 "data_offset": 2048, 00:17:45.261 "data_size": 63488 00:17:45.261 } 00:17:45.261 ] 
00:17:45.261 }' 00:17:45.261 16:33:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.261 16:33:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.520 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:45.520 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.520 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:45.520 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.520 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.779 [2024-12-06 16:33:27.370170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:45.779 [2024-12-06 16:33:27.370255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.779 [2024-12-06 16:33:27.370282] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:45.779 [2024-12-06 16:33:27.370296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.779 [2024-12-06 16:33:27.370715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.779 [2024-12-06 16:33:27.370735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:45.779 [2024-12-06 16:33:27.370804] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:45.779 [2024-12-06 16:33:27.370828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:45.779 [2024-12-06 16:33:27.370927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:45.779 [2024-12-06 16:33:27.370938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:45.779 [2024-12-06 16:33:27.371214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:45.779 [2024-12-06 16:33:27.371851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:45.779 [2024-12-06 16:33:27.371877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:45.779 [2024-12-06 16:33:27.372125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.779 pt4 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.779 16:33:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.779 "name": "raid_bdev1", 00:17:45.779 "uuid": "49406e8f-6572-46da-8e6b-9cc3b744e65d", 00:17:45.779 "strip_size_kb": 64, 00:17:45.779 "state": "online", 00:17:45.779 "raid_level": "raid5f", 00:17:45.779 "superblock": true, 00:17:45.779 "num_base_bdevs": 4, 00:17:45.779 "num_base_bdevs_discovered": 3, 00:17:45.779 "num_base_bdevs_operational": 3, 00:17:45.779 "base_bdevs_list": [ 00:17:45.779 { 00:17:45.779 "name": null, 00:17:45.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.779 "is_configured": false, 00:17:45.779 "data_offset": 2048, 00:17:45.779 "data_size": 63488 00:17:45.779 }, 00:17:45.779 { 00:17:45.779 "name": "pt2", 00:17:45.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.779 "is_configured": true, 00:17:45.779 "data_offset": 2048, 00:17:45.779 "data_size": 63488 00:17:45.779 }, 00:17:45.779 { 00:17:45.779 "name": "pt3", 00:17:45.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:45.779 "is_configured": true, 00:17:45.779 "data_offset": 2048, 00:17:45.779 "data_size": 63488 
00:17:45.779 }, 00:17:45.779 { 00:17:45.779 "name": "pt4", 00:17:45.779 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:45.779 "is_configured": true, 00:17:45.779 "data_offset": 2048, 00:17:45.779 "data_size": 63488 00:17:45.779 } 00:17:45.779 ] 00:17:45.779 }' 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.779 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.039 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:46.039 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:46.039 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.039 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.299 [2024-12-06 16:33:27.913513] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 49406e8f-6572-46da-8e6b-9cc3b744e65d '!=' 49406e8f-6572-46da-8e6b-9cc3b744e65d ']' 00:17:46.299 16:33:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 95038 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 95038 ']' 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 95038 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95038 00:17:46.299 killing process with pid 95038 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95038' 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 95038 00:17:46.299 [2024-12-06 16:33:27.979645] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.299 [2024-12-06 16:33:27.979740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.299 16:33:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 95038 00:17:46.299 [2024-12-06 16:33:27.979830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.299 [2024-12-06 16:33:27.979841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:46.299 [2024-12-06 16:33:28.023884] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.558 16:33:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:46.558 
00:17:46.558 real 0m7.073s 00:17:46.558 user 0m11.875s 00:17:46.558 sys 0m1.569s 00:17:46.558 16:33:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.558 16:33:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.558 ************************************ 00:17:46.558 END TEST raid5f_superblock_test 00:17:46.558 ************************************ 00:17:46.558 16:33:28 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:46.558 16:33:28 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:46.558 16:33:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:46.558 16:33:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.558 16:33:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.558 ************************************ 00:17:46.558 START TEST raid5f_rebuild_test 00:17:46.558 ************************************ 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:46.558 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:46.559 16:33:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95507 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95507 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 95507 ']' 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.559 16:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.818 [2024-12-06 16:33:28.411712] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:17:46.818 [2024-12-06 16:33:28.411829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95507 ] 00:17:46.818 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:46.818 Zero copy mechanism will not be used. 00:17:46.818 [2024-12-06 16:33:28.584224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.818 [2024-12-06 16:33:28.609639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.818 [2024-12-06 16:33:28.653216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.818 [2024-12-06 16:33:28.653256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.755 BaseBdev1_malloc 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:17:47.755 [2024-12-06 16:33:29.281326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:47.755 [2024-12-06 16:33:29.281382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.755 [2024-12-06 16:33:29.281414] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:47.755 [2024-12-06 16:33:29.281426] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.755 [2024-12-06 16:33:29.283522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.755 [2024-12-06 16:33:29.283559] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:47.755 BaseBdev1 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.755 BaseBdev2_malloc 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.755 [2024-12-06 16:33:29.309915] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:47.755 [2024-12-06 16:33:29.309967] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.755 [2024-12-06 16:33:29.309986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:47.755 [2024-12-06 16:33:29.309993] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.755 [2024-12-06 16:33:29.312023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.755 [2024-12-06 16:33:29.312057] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:47.755 BaseBdev2 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.755 BaseBdev3_malloc 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.755 [2024-12-06 16:33:29.338554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:47.755 [2024-12-06 16:33:29.338600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.755 [2024-12-06 16:33:29.338637] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:47.755 
[2024-12-06 16:33:29.338645] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.755 [2024-12-06 16:33:29.340684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.755 [2024-12-06 16:33:29.340719] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:47.755 BaseBdev3 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.755 BaseBdev4_malloc 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.755 [2024-12-06 16:33:29.375063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:47.755 [2024-12-06 16:33:29.375116] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.755 [2024-12-06 16:33:29.375140] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:47.755 [2024-12-06 16:33:29.375148] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.755 [2024-12-06 16:33:29.377457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:17:47.755 [2024-12-06 16:33:29.377492] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:47.755 BaseBdev4 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.755 spare_malloc 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.755 spare_delay 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.755 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.755 [2024-12-06 16:33:29.415692] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:47.755 [2024-12-06 16:33:29.415757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.755 [2024-12-06 16:33:29.415777] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:47.756 [2024-12-06 16:33:29.415787] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.756 [2024-12-06 16:33:29.418084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.756 [2024-12-06 16:33:29.418123] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:47.756 spare 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.756 [2024-12-06 16:33:29.427719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.756 [2024-12-06 16:33:29.429557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:47.756 [2024-12-06 16:33:29.429626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.756 [2024-12-06 16:33:29.429667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:47.756 [2024-12-06 16:33:29.429750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:17:47.756 [2024-12-06 16:33:29.429763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:47.756 [2024-12-06 16:33:29.430015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:47.756 [2024-12-06 16:33:29.430459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:17:47.756 [2024-12-06 16:33:29.430481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:17:47.756 [2024-12-06 
16:33:29.430585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.756 "name": "raid_bdev1", 00:17:47.756 "uuid": 
"0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:47.756 "strip_size_kb": 64, 00:17:47.756 "state": "online", 00:17:47.756 "raid_level": "raid5f", 00:17:47.756 "superblock": false, 00:17:47.756 "num_base_bdevs": 4, 00:17:47.756 "num_base_bdevs_discovered": 4, 00:17:47.756 "num_base_bdevs_operational": 4, 00:17:47.756 "base_bdevs_list": [ 00:17:47.756 { 00:17:47.756 "name": "BaseBdev1", 00:17:47.756 "uuid": "f6c5520c-1c51-5024-a868-3bee57a72965", 00:17:47.756 "is_configured": true, 00:17:47.756 "data_offset": 0, 00:17:47.756 "data_size": 65536 00:17:47.756 }, 00:17:47.756 { 00:17:47.756 "name": "BaseBdev2", 00:17:47.756 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:47.756 "is_configured": true, 00:17:47.756 "data_offset": 0, 00:17:47.756 "data_size": 65536 00:17:47.756 }, 00:17:47.756 { 00:17:47.756 "name": "BaseBdev3", 00:17:47.756 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:17:47.756 "is_configured": true, 00:17:47.756 "data_offset": 0, 00:17:47.756 "data_size": 65536 00:17:47.756 }, 00:17:47.756 { 00:17:47.756 "name": "BaseBdev4", 00:17:47.756 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:47.756 "is_configured": true, 00:17:47.756 "data_offset": 0, 00:17:47.756 "data_size": 65536 00:17:47.756 } 00:17:47.756 ] 00:17:47.756 }' 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.756 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.324 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.324 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.324 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.324 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:48.324 [2024-12-06 16:33:29.915792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:48.324 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.324 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:17:48.324 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.324 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.324 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.324 16:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:48.324 16:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:48.324 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:48.583 [2024-12-06 16:33:30.203178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:48.583 /dev/nbd0 00:17:48.583 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.584 1+0 records in 00:17:48.584 1+0 records out 00:17:48.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355924 s, 11.5 MB/s 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.584 16:33:30 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:48.584 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:49.187 512+0 records in 00:17:49.187 512+0 records out 00:17:49.187 100663296 bytes (101 MB, 96 MiB) copied, 0.416855 s, 241 MB/s 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:17:49.187 [2024-12-06 16:33:30.882755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.187 [2024-12-06 16:33:30.914767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.187 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.188 "name": "raid_bdev1", 00:17:49.188 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:49.188 "strip_size_kb": 64, 00:17:49.188 "state": "online", 00:17:49.188 "raid_level": "raid5f", 00:17:49.188 "superblock": false, 00:17:49.188 "num_base_bdevs": 4, 00:17:49.188 "num_base_bdevs_discovered": 3, 00:17:49.188 "num_base_bdevs_operational": 3, 00:17:49.188 "base_bdevs_list": [ 00:17:49.188 { 00:17:49.188 "name": null, 00:17:49.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.188 "is_configured": false, 00:17:49.188 "data_offset": 0, 00:17:49.188 "data_size": 65536 00:17:49.188 }, 00:17:49.188 { 00:17:49.188 "name": "BaseBdev2", 00:17:49.188 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:49.188 "is_configured": true, 00:17:49.188 
"data_offset": 0, 00:17:49.188 "data_size": 65536 00:17:49.188 }, 00:17:49.188 { 00:17:49.188 "name": "BaseBdev3", 00:17:49.188 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:17:49.188 "is_configured": true, 00:17:49.188 "data_offset": 0, 00:17:49.188 "data_size": 65536 00:17:49.188 }, 00:17:49.188 { 00:17:49.188 "name": "BaseBdev4", 00:17:49.188 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:49.188 "is_configured": true, 00:17:49.188 "data_offset": 0, 00:17:49.188 "data_size": 65536 00:17:49.188 } 00:17:49.188 ] 00:17:49.188 }' 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.188 16:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.761 16:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:49.761 16:33:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.761 16:33:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.761 [2024-12-06 16:33:31.410015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.761 [2024-12-06 16:33:31.414454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:17:49.761 16:33:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.761 16:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:49.761 [2024-12-06 16:33:31.416811] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.701 "name": "raid_bdev1", 00:17:50.701 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:50.701 "strip_size_kb": 64, 00:17:50.701 "state": "online", 00:17:50.701 "raid_level": "raid5f", 00:17:50.701 "superblock": false, 00:17:50.701 "num_base_bdevs": 4, 00:17:50.701 "num_base_bdevs_discovered": 4, 00:17:50.701 "num_base_bdevs_operational": 4, 00:17:50.701 "process": { 00:17:50.701 "type": "rebuild", 00:17:50.701 "target": "spare", 00:17:50.701 "progress": { 00:17:50.701 "blocks": 19200, 00:17:50.701 "percent": 9 00:17:50.701 } 00:17:50.701 }, 00:17:50.701 "base_bdevs_list": [ 00:17:50.701 { 00:17:50.701 "name": "spare", 00:17:50.701 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:17:50.701 "is_configured": true, 00:17:50.701 "data_offset": 0, 00:17:50.701 "data_size": 65536 00:17:50.701 }, 00:17:50.701 { 00:17:50.701 "name": "BaseBdev2", 00:17:50.701 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:50.701 "is_configured": true, 00:17:50.701 "data_offset": 0, 00:17:50.701 "data_size": 65536 00:17:50.701 }, 00:17:50.701 { 00:17:50.701 "name": "BaseBdev3", 00:17:50.701 "uuid": 
"1870d484-9c13-5902-9bff-c692e1779c80", 00:17:50.701 "is_configured": true, 00:17:50.701 "data_offset": 0, 00:17:50.701 "data_size": 65536 00:17:50.701 }, 00:17:50.701 { 00:17:50.701 "name": "BaseBdev4", 00:17:50.701 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:50.701 "is_configured": true, 00:17:50.701 "data_offset": 0, 00:17:50.701 "data_size": 65536 00:17:50.701 } 00:17:50.701 ] 00:17:50.701 }' 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.701 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.961 [2024-12-06 16:33:32.581347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:50.961 [2024-12-06 16:33:32.623393] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:50.961 [2024-12-06 16:33:32.623459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.961 [2024-12-06 16:33:32.623495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:50.961 [2024-12-06 16:33:32.623511] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.961 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.962 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.962 16:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.962 16:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.962 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.962 16:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.962 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.962 "name": "raid_bdev1", 00:17:50.962 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:50.962 "strip_size_kb": 64, 00:17:50.962 "state": "online", 00:17:50.962 "raid_level": "raid5f", 00:17:50.962 "superblock": false, 00:17:50.962 "num_base_bdevs": 4, 00:17:50.962 "num_base_bdevs_discovered": 3, 00:17:50.962 
"num_base_bdevs_operational": 3, 00:17:50.962 "base_bdevs_list": [ 00:17:50.962 { 00:17:50.962 "name": null, 00:17:50.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.962 "is_configured": false, 00:17:50.962 "data_offset": 0, 00:17:50.962 "data_size": 65536 00:17:50.962 }, 00:17:50.962 { 00:17:50.962 "name": "BaseBdev2", 00:17:50.962 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:50.962 "is_configured": true, 00:17:50.962 "data_offset": 0, 00:17:50.962 "data_size": 65536 00:17:50.962 }, 00:17:50.962 { 00:17:50.962 "name": "BaseBdev3", 00:17:50.962 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:17:50.962 "is_configured": true, 00:17:50.962 "data_offset": 0, 00:17:50.962 "data_size": 65536 00:17:50.962 }, 00:17:50.962 { 00:17:50.962 "name": "BaseBdev4", 00:17:50.962 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:50.962 "is_configured": true, 00:17:50.962 "data_offset": 0, 00:17:50.962 "data_size": 65536 00:17:50.962 } 00:17:50.962 ] 00:17:50.962 }' 00:17:50.962 16:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.962 16:33:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.221 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.221 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.221 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.222 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.222 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.222 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.222 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.222 16:33:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.222 16:33:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.481 16:33:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.481 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.481 "name": "raid_bdev1", 00:17:51.481 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:51.481 "strip_size_kb": 64, 00:17:51.481 "state": "online", 00:17:51.481 "raid_level": "raid5f", 00:17:51.481 "superblock": false, 00:17:51.481 "num_base_bdevs": 4, 00:17:51.481 "num_base_bdevs_discovered": 3, 00:17:51.481 "num_base_bdevs_operational": 3, 00:17:51.481 "base_bdevs_list": [ 00:17:51.481 { 00:17:51.481 "name": null, 00:17:51.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.481 "is_configured": false, 00:17:51.481 "data_offset": 0, 00:17:51.481 "data_size": 65536 00:17:51.481 }, 00:17:51.481 { 00:17:51.481 "name": "BaseBdev2", 00:17:51.481 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:51.481 "is_configured": true, 00:17:51.481 "data_offset": 0, 00:17:51.481 "data_size": 65536 00:17:51.481 }, 00:17:51.481 { 00:17:51.481 "name": "BaseBdev3", 00:17:51.481 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:17:51.481 "is_configured": true, 00:17:51.481 "data_offset": 0, 00:17:51.481 "data_size": 65536 00:17:51.481 }, 00:17:51.481 { 00:17:51.481 "name": "BaseBdev4", 00:17:51.481 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:51.481 "is_configured": true, 00:17:51.481 "data_offset": 0, 00:17:51.481 "data_size": 65536 00:17:51.481 } 00:17:51.481 ] 00:17:51.481 }' 00:17:51.481 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.481 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.481 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:17:51.481 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.481 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:51.481 16:33:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.481 16:33:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.481 [2024-12-06 16:33:33.172462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.481 [2024-12-06 16:33:33.176554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:51.481 16:33:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.481 16:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:51.481 [2024-12-06 16:33:33.178724] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:52.420 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.420 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.420 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.420 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.420 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.420 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.420 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.420 16:33:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.420 16:33:34 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.420 16:33:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.420 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.420 "name": "raid_bdev1", 00:17:52.420 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:52.420 "strip_size_kb": 64, 00:17:52.420 "state": "online", 00:17:52.420 "raid_level": "raid5f", 00:17:52.420 "superblock": false, 00:17:52.420 "num_base_bdevs": 4, 00:17:52.420 "num_base_bdevs_discovered": 4, 00:17:52.420 "num_base_bdevs_operational": 4, 00:17:52.420 "process": { 00:17:52.420 "type": "rebuild", 00:17:52.420 "target": "spare", 00:17:52.420 "progress": { 00:17:52.420 "blocks": 19200, 00:17:52.420 "percent": 9 00:17:52.420 } 00:17:52.420 }, 00:17:52.420 "base_bdevs_list": [ 00:17:52.420 { 00:17:52.420 "name": "spare", 00:17:52.420 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:17:52.420 "is_configured": true, 00:17:52.420 "data_offset": 0, 00:17:52.420 "data_size": 65536 00:17:52.420 }, 00:17:52.420 { 00:17:52.420 "name": "BaseBdev2", 00:17:52.420 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:52.420 "is_configured": true, 00:17:52.420 "data_offset": 0, 00:17:52.420 "data_size": 65536 00:17:52.420 }, 00:17:52.420 { 00:17:52.420 "name": "BaseBdev3", 00:17:52.420 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:17:52.420 "is_configured": true, 00:17:52.420 "data_offset": 0, 00:17:52.420 "data_size": 65536 00:17:52.420 }, 00:17:52.420 { 00:17:52.420 "name": "BaseBdev4", 00:17:52.420 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:52.420 "is_configured": true, 00:17:52.420 "data_offset": 0, 00:17:52.420 "data_size": 65536 00:17:52.420 } 00:17:52.420 ] 00:17:52.420 }' 00:17:52.420 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=524 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.680 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.680 
"name": "raid_bdev1", 00:17:52.680 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:52.680 "strip_size_kb": 64, 00:17:52.680 "state": "online", 00:17:52.680 "raid_level": "raid5f", 00:17:52.680 "superblock": false, 00:17:52.680 "num_base_bdevs": 4, 00:17:52.680 "num_base_bdevs_discovered": 4, 00:17:52.680 "num_base_bdevs_operational": 4, 00:17:52.680 "process": { 00:17:52.680 "type": "rebuild", 00:17:52.680 "target": "spare", 00:17:52.681 "progress": { 00:17:52.681 "blocks": 21120, 00:17:52.681 "percent": 10 00:17:52.681 } 00:17:52.681 }, 00:17:52.681 "base_bdevs_list": [ 00:17:52.681 { 00:17:52.681 "name": "spare", 00:17:52.681 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:17:52.681 "is_configured": true, 00:17:52.681 "data_offset": 0, 00:17:52.681 "data_size": 65536 00:17:52.681 }, 00:17:52.681 { 00:17:52.681 "name": "BaseBdev2", 00:17:52.681 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:52.681 "is_configured": true, 00:17:52.681 "data_offset": 0, 00:17:52.681 "data_size": 65536 00:17:52.681 }, 00:17:52.681 { 00:17:52.681 "name": "BaseBdev3", 00:17:52.681 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:17:52.681 "is_configured": true, 00:17:52.681 "data_offset": 0, 00:17:52.681 "data_size": 65536 00:17:52.681 }, 00:17:52.681 { 00:17:52.681 "name": "BaseBdev4", 00:17:52.681 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:52.681 "is_configured": true, 00:17:52.681 "data_offset": 0, 00:17:52.681 "data_size": 65536 00:17:52.681 } 00:17:52.681 ] 00:17:52.681 }' 00:17:52.681 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.681 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.681 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.681 16:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.681 16:33:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.059 "name": "raid_bdev1", 00:17:54.059 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:54.059 "strip_size_kb": 64, 00:17:54.059 "state": "online", 00:17:54.059 "raid_level": "raid5f", 00:17:54.059 "superblock": false, 00:17:54.059 "num_base_bdevs": 4, 00:17:54.059 "num_base_bdevs_discovered": 4, 00:17:54.059 "num_base_bdevs_operational": 4, 00:17:54.059 "process": { 00:17:54.059 "type": "rebuild", 00:17:54.059 "target": "spare", 00:17:54.059 "progress": { 00:17:54.059 "blocks": 42240, 00:17:54.059 "percent": 21 00:17:54.059 } 00:17:54.059 }, 00:17:54.059 "base_bdevs_list": [ 00:17:54.059 { 
00:17:54.059 "name": "spare", 00:17:54.059 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:17:54.059 "is_configured": true, 00:17:54.059 "data_offset": 0, 00:17:54.059 "data_size": 65536 00:17:54.059 }, 00:17:54.059 { 00:17:54.059 "name": "BaseBdev2", 00:17:54.059 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:54.059 "is_configured": true, 00:17:54.059 "data_offset": 0, 00:17:54.059 "data_size": 65536 00:17:54.059 }, 00:17:54.059 { 00:17:54.059 "name": "BaseBdev3", 00:17:54.059 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:17:54.059 "is_configured": true, 00:17:54.059 "data_offset": 0, 00:17:54.059 "data_size": 65536 00:17:54.059 }, 00:17:54.059 { 00:17:54.059 "name": "BaseBdev4", 00:17:54.059 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:54.059 "is_configured": true, 00:17:54.059 "data_offset": 0, 00:17:54.059 "data_size": 65536 00:17:54.059 } 00:17:54.059 ] 00:17:54.059 }' 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.059 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.060 16:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.998 "name": "raid_bdev1", 00:17:54.998 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:54.998 "strip_size_kb": 64, 00:17:54.998 "state": "online", 00:17:54.998 "raid_level": "raid5f", 00:17:54.998 "superblock": false, 00:17:54.998 "num_base_bdevs": 4, 00:17:54.998 "num_base_bdevs_discovered": 4, 00:17:54.998 "num_base_bdevs_operational": 4, 00:17:54.998 "process": { 00:17:54.998 "type": "rebuild", 00:17:54.998 "target": "spare", 00:17:54.998 "progress": { 00:17:54.998 "blocks": 63360, 00:17:54.998 "percent": 32 00:17:54.998 } 00:17:54.998 }, 00:17:54.998 "base_bdevs_list": [ 00:17:54.998 { 00:17:54.998 "name": "spare", 00:17:54.998 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:17:54.998 "is_configured": true, 00:17:54.998 "data_offset": 0, 00:17:54.998 "data_size": 65536 00:17:54.998 }, 00:17:54.998 { 00:17:54.998 "name": "BaseBdev2", 00:17:54.998 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:54.998 "is_configured": true, 00:17:54.998 "data_offset": 0, 00:17:54.998 "data_size": 65536 00:17:54.998 }, 00:17:54.998 { 00:17:54.998 "name": "BaseBdev3", 00:17:54.998 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:17:54.998 "is_configured": true, 00:17:54.998 "data_offset": 0, 00:17:54.998 
"data_size": 65536 00:17:54.998 }, 00:17:54.998 { 00:17:54.998 "name": "BaseBdev4", 00:17:54.998 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:54.998 "is_configured": true, 00:17:54.998 "data_offset": 0, 00:17:54.998 "data_size": 65536 00:17:54.998 } 00:17:54.998 ] 00:17:54.998 }' 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.998 16:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:55.938 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.938 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.938 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.938 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.938 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.938 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.938 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.938 16:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.938 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.938 16:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.938 16:33:37 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.199 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.199 "name": "raid_bdev1", 00:17:56.199 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:56.199 "strip_size_kb": 64, 00:17:56.199 "state": "online", 00:17:56.199 "raid_level": "raid5f", 00:17:56.199 "superblock": false, 00:17:56.199 "num_base_bdevs": 4, 00:17:56.199 "num_base_bdevs_discovered": 4, 00:17:56.199 "num_base_bdevs_operational": 4, 00:17:56.199 "process": { 00:17:56.199 "type": "rebuild", 00:17:56.199 "target": "spare", 00:17:56.199 "progress": { 00:17:56.199 "blocks": 86400, 00:17:56.199 "percent": 43 00:17:56.199 } 00:17:56.199 }, 00:17:56.199 "base_bdevs_list": [ 00:17:56.199 { 00:17:56.199 "name": "spare", 00:17:56.199 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:17:56.199 "is_configured": true, 00:17:56.199 "data_offset": 0, 00:17:56.199 "data_size": 65536 00:17:56.199 }, 00:17:56.199 { 00:17:56.199 "name": "BaseBdev2", 00:17:56.199 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:56.199 "is_configured": true, 00:17:56.199 "data_offset": 0, 00:17:56.199 "data_size": 65536 00:17:56.199 }, 00:17:56.199 { 00:17:56.199 "name": "BaseBdev3", 00:17:56.199 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:17:56.199 "is_configured": true, 00:17:56.199 "data_offset": 0, 00:17:56.199 "data_size": 65536 00:17:56.199 }, 00:17:56.199 { 00:17:56.199 "name": "BaseBdev4", 00:17:56.199 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:56.199 "is_configured": true, 00:17:56.199 "data_offset": 0, 00:17:56.199 "data_size": 65536 00:17:56.199 } 00:17:56.199 ] 00:17:56.199 }' 00:17:56.199 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.199 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.199 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:56.199 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.199 16:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:57.143 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:57.143 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.143 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.143 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.143 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.143 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.143 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.143 16:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.143 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.143 16:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.143 16:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.143 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.143 "name": "raid_bdev1", 00:17:57.143 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:57.143 "strip_size_kb": 64, 00:17:57.143 "state": "online", 00:17:57.143 "raid_level": "raid5f", 00:17:57.143 "superblock": false, 00:17:57.143 "num_base_bdevs": 4, 00:17:57.143 "num_base_bdevs_discovered": 4, 00:17:57.143 "num_base_bdevs_operational": 4, 00:17:57.143 "process": { 00:17:57.143 "type": "rebuild", 00:17:57.143 "target": "spare", 00:17:57.143 
"progress": { 00:17:57.143 "blocks": 107520, 00:17:57.143 "percent": 54 00:17:57.144 } 00:17:57.144 }, 00:17:57.144 "base_bdevs_list": [ 00:17:57.144 { 00:17:57.144 "name": "spare", 00:17:57.144 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:17:57.144 "is_configured": true, 00:17:57.144 "data_offset": 0, 00:17:57.144 "data_size": 65536 00:17:57.144 }, 00:17:57.144 { 00:17:57.144 "name": "BaseBdev2", 00:17:57.144 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:57.144 "is_configured": true, 00:17:57.144 "data_offset": 0, 00:17:57.144 "data_size": 65536 00:17:57.144 }, 00:17:57.144 { 00:17:57.144 "name": "BaseBdev3", 00:17:57.144 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:17:57.144 "is_configured": true, 00:17:57.144 "data_offset": 0, 00:17:57.144 "data_size": 65536 00:17:57.144 }, 00:17:57.144 { 00:17:57.144 "name": "BaseBdev4", 00:17:57.144 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:57.144 "is_configured": true, 00:17:57.144 "data_offset": 0, 00:17:57.144 "data_size": 65536 00:17:57.144 } 00:17:57.144 ] 00:17:57.144 }' 00:17:57.144 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.144 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.144 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.402 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.402 16:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:58.338 16:33:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.338 16:33:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.338 16:33:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.338 16:33:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.338 16:33:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.338 16:33:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.338 16:33:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.338 16:33:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.338 16:33:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.338 16:33:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.338 16:33:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.338 16:33:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.338 "name": "raid_bdev1", 00:17:58.338 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:58.338 "strip_size_kb": 64, 00:17:58.338 "state": "online", 00:17:58.338 "raid_level": "raid5f", 00:17:58.338 "superblock": false, 00:17:58.338 "num_base_bdevs": 4, 00:17:58.338 "num_base_bdevs_discovered": 4, 00:17:58.338 "num_base_bdevs_operational": 4, 00:17:58.338 "process": { 00:17:58.338 "type": "rebuild", 00:17:58.338 "target": "spare", 00:17:58.338 "progress": { 00:17:58.338 "blocks": 128640, 00:17:58.338 "percent": 65 00:17:58.338 } 00:17:58.338 }, 00:17:58.338 "base_bdevs_list": [ 00:17:58.338 { 00:17:58.338 "name": "spare", 00:17:58.338 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:17:58.338 "is_configured": true, 00:17:58.338 "data_offset": 0, 00:17:58.338 "data_size": 65536 00:17:58.338 }, 00:17:58.338 { 00:17:58.338 "name": "BaseBdev2", 00:17:58.338 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:58.338 "is_configured": true, 00:17:58.338 "data_offset": 0, 00:17:58.338 "data_size": 65536 00:17:58.338 }, 00:17:58.338 { 
00:17:58.338 "name": "BaseBdev3", 00:17:58.338 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:17:58.338 "is_configured": true, 00:17:58.338 "data_offset": 0, 00:17:58.338 "data_size": 65536 00:17:58.338 }, 00:17:58.338 { 00:17:58.338 "name": "BaseBdev4", 00:17:58.338 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:58.338 "is_configured": true, 00:17:58.338 "data_offset": 0, 00:17:58.338 "data_size": 65536 00:17:58.338 } 00:17:58.338 ] 00:17:58.338 }' 00:17:58.338 16:33:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.338 16:33:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.338 16:33:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.338 16:33:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.338 16:33:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:59.274 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.274 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.274 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.274 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.274 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.274 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.533 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.533 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.533 16:33:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:17:59.533 16:33:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.533 16:33:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.533 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.533 "name": "raid_bdev1", 00:17:59.533 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:17:59.533 "strip_size_kb": 64, 00:17:59.533 "state": "online", 00:17:59.533 "raid_level": "raid5f", 00:17:59.533 "superblock": false, 00:17:59.533 "num_base_bdevs": 4, 00:17:59.533 "num_base_bdevs_discovered": 4, 00:17:59.533 "num_base_bdevs_operational": 4, 00:17:59.533 "process": { 00:17:59.533 "type": "rebuild", 00:17:59.533 "target": "spare", 00:17:59.533 "progress": { 00:17:59.533 "blocks": 149760, 00:17:59.533 "percent": 76 00:17:59.533 } 00:17:59.533 }, 00:17:59.533 "base_bdevs_list": [ 00:17:59.533 { 00:17:59.533 "name": "spare", 00:17:59.533 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:17:59.533 "is_configured": true, 00:17:59.533 "data_offset": 0, 00:17:59.533 "data_size": 65536 00:17:59.533 }, 00:17:59.533 { 00:17:59.533 "name": "BaseBdev2", 00:17:59.533 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:17:59.533 "is_configured": true, 00:17:59.533 "data_offset": 0, 00:17:59.533 "data_size": 65536 00:17:59.533 }, 00:17:59.533 { 00:17:59.533 "name": "BaseBdev3", 00:17:59.533 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:17:59.533 "is_configured": true, 00:17:59.533 "data_offset": 0, 00:17:59.533 "data_size": 65536 00:17:59.533 }, 00:17:59.533 { 00:17:59.533 "name": "BaseBdev4", 00:17:59.534 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:17:59.534 "is_configured": true, 00:17:59.534 "data_offset": 0, 00:17:59.534 "data_size": 65536 00:17:59.534 } 00:17:59.534 ] 00:17:59.534 }' 00:17:59.534 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.534 16:33:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.534 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.534 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.534 16:33:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.472 "name": "raid_bdev1", 00:18:00.472 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:18:00.472 "strip_size_kb": 64, 00:18:00.472 "state": "online", 00:18:00.472 "raid_level": "raid5f", 00:18:00.472 "superblock": false, 00:18:00.472 "num_base_bdevs": 4, 00:18:00.472 
"num_base_bdevs_discovered": 4, 00:18:00.472 "num_base_bdevs_operational": 4, 00:18:00.472 "process": { 00:18:00.472 "type": "rebuild", 00:18:00.472 "target": "spare", 00:18:00.472 "progress": { 00:18:00.472 "blocks": 172800, 00:18:00.472 "percent": 87 00:18:00.472 } 00:18:00.472 }, 00:18:00.472 "base_bdevs_list": [ 00:18:00.472 { 00:18:00.472 "name": "spare", 00:18:00.472 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:18:00.472 "is_configured": true, 00:18:00.472 "data_offset": 0, 00:18:00.472 "data_size": 65536 00:18:00.472 }, 00:18:00.472 { 00:18:00.472 "name": "BaseBdev2", 00:18:00.472 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:18:00.472 "is_configured": true, 00:18:00.472 "data_offset": 0, 00:18:00.472 "data_size": 65536 00:18:00.472 }, 00:18:00.472 { 00:18:00.472 "name": "BaseBdev3", 00:18:00.472 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:18:00.472 "is_configured": true, 00:18:00.472 "data_offset": 0, 00:18:00.472 "data_size": 65536 00:18:00.472 }, 00:18:00.472 { 00:18:00.472 "name": "BaseBdev4", 00:18:00.472 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:18:00.472 "is_configured": true, 00:18:00.472 "data_offset": 0, 00:18:00.472 "data_size": 65536 00:18:00.472 } 00:18:00.472 ] 00:18:00.472 }' 00:18:00.472 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.732 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.732 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.732 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.732 16:33:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.670 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.670 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:01.670 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.670 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.670 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.670 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.670 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.670 16:33:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.670 16:33:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.670 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.670 16:33:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.670 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.670 "name": "raid_bdev1", 00:18:01.670 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:18:01.670 "strip_size_kb": 64, 00:18:01.670 "state": "online", 00:18:01.670 "raid_level": "raid5f", 00:18:01.670 "superblock": false, 00:18:01.670 "num_base_bdevs": 4, 00:18:01.670 "num_base_bdevs_discovered": 4, 00:18:01.670 "num_base_bdevs_operational": 4, 00:18:01.670 "process": { 00:18:01.670 "type": "rebuild", 00:18:01.670 "target": "spare", 00:18:01.670 "progress": { 00:18:01.670 "blocks": 193920, 00:18:01.670 "percent": 98 00:18:01.670 } 00:18:01.670 }, 00:18:01.670 "base_bdevs_list": [ 00:18:01.670 { 00:18:01.670 "name": "spare", 00:18:01.670 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:18:01.670 "is_configured": true, 00:18:01.670 "data_offset": 0, 00:18:01.670 "data_size": 65536 00:18:01.670 }, 00:18:01.670 { 00:18:01.670 "name": "BaseBdev2", 00:18:01.670 "uuid": 
"9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:18:01.670 "is_configured": true, 00:18:01.670 "data_offset": 0, 00:18:01.670 "data_size": 65536 00:18:01.670 }, 00:18:01.670 { 00:18:01.670 "name": "BaseBdev3", 00:18:01.670 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:18:01.670 "is_configured": true, 00:18:01.671 "data_offset": 0, 00:18:01.671 "data_size": 65536 00:18:01.671 }, 00:18:01.671 { 00:18:01.671 "name": "BaseBdev4", 00:18:01.671 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:18:01.671 "is_configured": true, 00:18:01.671 "data_offset": 0, 00:18:01.671 "data_size": 65536 00:18:01.671 } 00:18:01.671 ] 00:18:01.671 }' 00:18:01.671 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.671 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.671 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.671 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.671 16:33:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.929 [2024-12-06 16:33:43.535840] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:01.929 [2024-12-06 16:33:43.535917] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:01.929 [2024-12-06 16:33:43.535956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.865 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.865 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.865 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.866 "name": "raid_bdev1", 00:18:02.866 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:18:02.866 "strip_size_kb": 64, 00:18:02.866 "state": "online", 00:18:02.866 "raid_level": "raid5f", 00:18:02.866 "superblock": false, 00:18:02.866 "num_base_bdevs": 4, 00:18:02.866 "num_base_bdevs_discovered": 4, 00:18:02.866 "num_base_bdevs_operational": 4, 00:18:02.866 "base_bdevs_list": [ 00:18:02.866 { 00:18:02.866 "name": "spare", 00:18:02.866 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:18:02.866 "is_configured": true, 00:18:02.866 "data_offset": 0, 00:18:02.866 "data_size": 65536 00:18:02.866 }, 00:18:02.866 { 00:18:02.866 "name": "BaseBdev2", 00:18:02.866 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:18:02.866 "is_configured": true, 00:18:02.866 "data_offset": 0, 00:18:02.866 "data_size": 65536 00:18:02.866 }, 00:18:02.866 { 00:18:02.866 "name": "BaseBdev3", 00:18:02.866 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:18:02.866 "is_configured": true, 00:18:02.866 "data_offset": 0, 00:18:02.866 "data_size": 65536 00:18:02.866 }, 00:18:02.866 { 00:18:02.866 "name": "BaseBdev4", 00:18:02.866 
"uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:18:02.866 "is_configured": true, 00:18:02.866 "data_offset": 0, 00:18:02.866 "data_size": 65536 00:18:02.866 } 00:18:02.866 ] 00:18:02.866 }' 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.866 "name": "raid_bdev1", 00:18:02.866 "uuid": 
"0d770191-c412-4cda-ad9a-f66b4038720e", 00:18:02.866 "strip_size_kb": 64, 00:18:02.866 "state": "online", 00:18:02.866 "raid_level": "raid5f", 00:18:02.866 "superblock": false, 00:18:02.866 "num_base_bdevs": 4, 00:18:02.866 "num_base_bdevs_discovered": 4, 00:18:02.866 "num_base_bdevs_operational": 4, 00:18:02.866 "base_bdevs_list": [ 00:18:02.866 { 00:18:02.866 "name": "spare", 00:18:02.866 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:18:02.866 "is_configured": true, 00:18:02.866 "data_offset": 0, 00:18:02.866 "data_size": 65536 00:18:02.866 }, 00:18:02.866 { 00:18:02.866 "name": "BaseBdev2", 00:18:02.866 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:18:02.866 "is_configured": true, 00:18:02.866 "data_offset": 0, 00:18:02.866 "data_size": 65536 00:18:02.866 }, 00:18:02.866 { 00:18:02.866 "name": "BaseBdev3", 00:18:02.866 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:18:02.866 "is_configured": true, 00:18:02.866 "data_offset": 0, 00:18:02.866 "data_size": 65536 00:18:02.866 }, 00:18:02.866 { 00:18:02.866 "name": "BaseBdev4", 00:18:02.866 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:18:02.866 "is_configured": true, 00:18:02.866 "data_offset": 0, 00:18:02.866 "data_size": 65536 00:18:02.866 } 00:18:02.866 ] 00:18:02.866 }' 00:18:02.866 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.125 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.125 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.125 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.125 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:03.125 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.125 16:33:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.125 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.125 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.125 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.125 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.126 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.126 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.126 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.126 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.126 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.126 16:33:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.126 16:33:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.126 16:33:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.126 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.126 "name": "raid_bdev1", 00:18:03.126 "uuid": "0d770191-c412-4cda-ad9a-f66b4038720e", 00:18:03.126 "strip_size_kb": 64, 00:18:03.126 "state": "online", 00:18:03.126 "raid_level": "raid5f", 00:18:03.126 "superblock": false, 00:18:03.126 "num_base_bdevs": 4, 00:18:03.126 "num_base_bdevs_discovered": 4, 00:18:03.126 "num_base_bdevs_operational": 4, 00:18:03.126 "base_bdevs_list": [ 00:18:03.126 { 00:18:03.126 "name": "spare", 00:18:03.126 "uuid": "e845ec7c-6919-50b5-904d-7f5125f307c3", 00:18:03.126 "is_configured": 
true, 00:18:03.126 "data_offset": 0, 00:18:03.126 "data_size": 65536 00:18:03.126 }, 00:18:03.126 { 00:18:03.126 "name": "BaseBdev2", 00:18:03.126 "uuid": "9273495a-6850-5c61-b68d-cbcbc1ff3028", 00:18:03.126 "is_configured": true, 00:18:03.126 "data_offset": 0, 00:18:03.126 "data_size": 65536 00:18:03.126 }, 00:18:03.126 { 00:18:03.126 "name": "BaseBdev3", 00:18:03.126 "uuid": "1870d484-9c13-5902-9bff-c692e1779c80", 00:18:03.126 "is_configured": true, 00:18:03.126 "data_offset": 0, 00:18:03.126 "data_size": 65536 00:18:03.126 }, 00:18:03.126 { 00:18:03.126 "name": "BaseBdev4", 00:18:03.126 "uuid": "bfd67ea6-5320-5107-b3e2-540f27ea82b1", 00:18:03.126 "is_configured": true, 00:18:03.126 "data_offset": 0, 00:18:03.126 "data_size": 65536 00:18:03.126 } 00:18:03.126 ] 00:18:03.126 }' 00:18:03.126 16:33:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.126 16:33:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.390 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.390 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.390 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.391 [2024-12-06 16:33:45.202922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.391 [2024-12-06 16:33:45.202955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.391 [2024-12-06 16:33:45.203048] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.391 [2024-12-06 16:33:45.203146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.391 [2024-12-06 16:33:45.203162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:18:03.391 16:33:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.391 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.391 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.391 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:03.391 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.391 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:03.653 /dev/nbd0 00:18:03.653 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.911 1+0 records in 00:18:03.911 1+0 records out 00:18:03.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339345 s, 12.1 MB/s 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:03.911 /dev/nbd1 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:03.911 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.169 1+0 records in 00:18:04.169 1+0 records out 00:18:04.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028323 s, 14.5 MB/s 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.169 16:33:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:04.426 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:04.426 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:04.426 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:04.426 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.426 16:33:46 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.426 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:04.426 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:04.426 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.426 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.426 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95507 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 95507 ']' 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 95507 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95507 00:18:04.685 killing process with pid 95507 00:18:04.685 Received shutdown signal, test time was about 60.000000 seconds 00:18:04.685 00:18:04.685 Latency(us) 00:18:04.685 [2024-12-06T16:33:46.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.685 [2024-12-06T16:33:46.524Z] =================================================================================================================== 00:18:04.685 [2024-12-06T16:33:46.524Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95507' 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 95507 00:18:04.685 [2024-12-06 16:33:46.328384] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.685 16:33:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 95507 00:18:04.685 [2024-12-06 16:33:46.377282] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:04.944 00:18:04.944 real 0m18.260s 00:18:04.944 user 0m22.030s 00:18:04.944 sys 0m2.261s 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.944 ************************************ 00:18:04.944 END TEST raid5f_rebuild_test 00:18:04.944 ************************************ 00:18:04.944 16:33:46 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:04.944 16:33:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:04.944 16:33:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.944 16:33:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.944 ************************************ 00:18:04.944 START TEST raid5f_rebuild_test_sb 00:18:04.944 ************************************ 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=96013 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 96013 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 96013 ']' 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.944 16:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.944 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:04.944 Zero copy mechanism will not be used. 00:18:04.944 [2024-12-06 16:33:46.742904] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:18:04.944 [2024-12-06 16:33:46.743038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96013 ] 00:18:05.204 [2024-12-06 16:33:46.915230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.204 [2024-12-06 16:33:46.941250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.204 [2024-12-06 16:33:46.985489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.204 [2024-12-06 16:33:46.985529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.160 BaseBdev1_malloc 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.160 [2024-12-06 16:33:47.640985] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:06.160 [2024-12-06 16:33:47.641049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.160 [2024-12-06 16:33:47.641088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:06.160 [2024-12-06 16:33:47.641101] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.160 [2024-12-06 16:33:47.643228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.160 [2024-12-06 16:33:47.643268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.160 BaseBdev1 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.160 BaseBdev2_malloc 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.160 [2024-12-06 16:33:47.669440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:06.160 [2024-12-06 16:33:47.669488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:06.160 [2024-12-06 16:33:47.669524] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:06.160 [2024-12-06 16:33:47.669533] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.160 [2024-12-06 16:33:47.671501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.160 [2024-12-06 16:33:47.671534] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:06.160 BaseBdev2 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.160 BaseBdev3_malloc 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.160 [2024-12-06 16:33:47.697892] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:06.160 [2024-12-06 16:33:47.697941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.160 [2024-12-06 16:33:47.697962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:06.160 [2024-12-06 
16:33:47.697971] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.160 [2024-12-06 16:33:47.699999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.160 [2024-12-06 16:33:47.700032] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:06.160 BaseBdev3 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.160 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.161 BaseBdev4_malloc 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.161 [2024-12-06 16:33:47.738318] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:06.161 [2024-12-06 16:33:47.738369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.161 [2024-12-06 16:33:47.738392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:06.161 [2024-12-06 16:33:47.738400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.161 [2024-12-06 16:33:47.740364] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:18:06.161 [2024-12-06 16:33:47.740465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:06.161 BaseBdev4 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.161 spare_malloc 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.161 spare_delay 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.161 [2024-12-06 16:33:47.778806] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.161 [2024-12-06 16:33:47.778854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.161 [2024-12-06 16:33:47.778889] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:18:06.161 [2024-12-06 16:33:47.778897] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.161 [2024-12-06 16:33:47.781096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.161 [2024-12-06 16:33:47.781134] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.161 spare 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.161 [2024-12-06 16:33:47.790854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.161 [2024-12-06 16:33:47.792821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.161 [2024-12-06 16:33:47.792889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:06.161 [2024-12-06 16:33:47.792942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:06.161 [2024-12-06 16:33:47.793107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:18:06.161 [2024-12-06 16:33:47.793119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:06.161 [2024-12-06 16:33:47.793369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:06.161 [2024-12-06 16:33:47.793816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:18:06.161 [2024-12-06 16:33:47.793837] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000006280 00:18:06.161 [2024-12-06 16:33:47.793971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.161 16:33:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.161 "name": "raid_bdev1", 00:18:06.161 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:06.161 "strip_size_kb": 64, 00:18:06.161 "state": "online", 00:18:06.161 "raid_level": "raid5f", 00:18:06.161 "superblock": true, 00:18:06.161 "num_base_bdevs": 4, 00:18:06.161 "num_base_bdevs_discovered": 4, 00:18:06.161 "num_base_bdevs_operational": 4, 00:18:06.161 "base_bdevs_list": [ 00:18:06.161 { 00:18:06.161 "name": "BaseBdev1", 00:18:06.161 "uuid": "766f4c37-f1d8-5cb0-8a69-fddd3868649f", 00:18:06.161 "is_configured": true, 00:18:06.161 "data_offset": 2048, 00:18:06.161 "data_size": 63488 00:18:06.161 }, 00:18:06.161 { 00:18:06.161 "name": "BaseBdev2", 00:18:06.161 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:06.161 "is_configured": true, 00:18:06.161 "data_offset": 2048, 00:18:06.161 "data_size": 63488 00:18:06.161 }, 00:18:06.161 { 00:18:06.161 "name": "BaseBdev3", 00:18:06.161 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:06.161 "is_configured": true, 00:18:06.161 "data_offset": 2048, 00:18:06.161 "data_size": 63488 00:18:06.161 }, 00:18:06.161 { 00:18:06.161 "name": "BaseBdev4", 00:18:06.161 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:06.161 "is_configured": true, 00:18:06.161 "data_offset": 2048, 00:18:06.161 "data_size": 63488 00:18:06.161 } 00:18:06.161 ] 00:18:06.161 }' 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.161 16:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.471 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.471 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.471 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:06.471 16:33:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.471 [2024-12-06 16:33:48.235212] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.471 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.471 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:06.471 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:06.471 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.471 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.471 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.471 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:06.730 16:33:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:06.730 [2024-12-06 16:33:48.502626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:06.730 /dev/nbd0 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:06.730 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:06.730 1+0 records in 00:18:06.731 
1+0 records out 00:18:06.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564385 s, 7.3 MB/s 00:18:06.731 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.731 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:06.991 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.991 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:06.991 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:06.991 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:06.991 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:06.991 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:06.991 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:06.991 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:06.991 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:07.250 496+0 records in 00:18:07.250 496+0 records out 00:18:07.250 97517568 bytes (98 MB, 93 MiB) copied, 0.375129 s, 260 MB/s 00:18:07.250 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:07.250 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:07.250 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:07.250 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:07.250 16:33:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:07.250 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:07.250 16:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:07.509 [2024-12-06 16:33:49.150726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.509 [2024-12-06 16:33:49.186711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:07.509 16:33:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.509 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.510 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.510 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.510 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.510 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.510 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.510 "name": "raid_bdev1", 00:18:07.510 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:07.510 "strip_size_kb": 64, 00:18:07.510 "state": "online", 00:18:07.510 "raid_level": "raid5f", 00:18:07.510 "superblock": true, 00:18:07.510 "num_base_bdevs": 4, 00:18:07.510 "num_base_bdevs_discovered": 3, 00:18:07.510 "num_base_bdevs_operational": 3, 00:18:07.510 
"base_bdevs_list": [ 00:18:07.510 { 00:18:07.510 "name": null, 00:18:07.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.510 "is_configured": false, 00:18:07.510 "data_offset": 0, 00:18:07.510 "data_size": 63488 00:18:07.510 }, 00:18:07.510 { 00:18:07.510 "name": "BaseBdev2", 00:18:07.510 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:07.510 "is_configured": true, 00:18:07.510 "data_offset": 2048, 00:18:07.510 "data_size": 63488 00:18:07.510 }, 00:18:07.510 { 00:18:07.510 "name": "BaseBdev3", 00:18:07.510 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:07.510 "is_configured": true, 00:18:07.510 "data_offset": 2048, 00:18:07.510 "data_size": 63488 00:18:07.510 }, 00:18:07.510 { 00:18:07.510 "name": "BaseBdev4", 00:18:07.510 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:07.510 "is_configured": true, 00:18:07.510 "data_offset": 2048, 00:18:07.510 "data_size": 63488 00:18:07.510 } 00:18:07.510 ] 00:18:07.510 }' 00:18:07.510 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.510 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.769 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:07.769 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.769 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.769 [2024-12-06 16:33:49.602097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.769 [2024-12-06 16:33:49.606656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:18:08.027 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.027 16:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:08.027 [2024-12-06 16:33:49.609389] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.966 "name": "raid_bdev1", 00:18:08.966 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:08.966 "strip_size_kb": 64, 00:18:08.966 "state": "online", 00:18:08.966 "raid_level": "raid5f", 00:18:08.966 "superblock": true, 00:18:08.966 "num_base_bdevs": 4, 00:18:08.966 "num_base_bdevs_discovered": 4, 00:18:08.966 "num_base_bdevs_operational": 4, 00:18:08.966 "process": { 00:18:08.966 "type": "rebuild", 00:18:08.966 "target": "spare", 00:18:08.966 "progress": { 00:18:08.966 "blocks": 19200, 00:18:08.966 "percent": 10 00:18:08.966 } 00:18:08.966 }, 00:18:08.966 "base_bdevs_list": [ 00:18:08.966 { 00:18:08.966 "name": "spare", 00:18:08.966 "uuid": 
"b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:08.966 "is_configured": true, 00:18:08.966 "data_offset": 2048, 00:18:08.966 "data_size": 63488 00:18:08.966 }, 00:18:08.966 { 00:18:08.966 "name": "BaseBdev2", 00:18:08.966 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:08.966 "is_configured": true, 00:18:08.966 "data_offset": 2048, 00:18:08.966 "data_size": 63488 00:18:08.966 }, 00:18:08.966 { 00:18:08.966 "name": "BaseBdev3", 00:18:08.966 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:08.966 "is_configured": true, 00:18:08.966 "data_offset": 2048, 00:18:08.966 "data_size": 63488 00:18:08.966 }, 00:18:08.966 { 00:18:08.966 "name": "BaseBdev4", 00:18:08.966 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:08.966 "is_configured": true, 00:18:08.966 "data_offset": 2048, 00:18:08.966 "data_size": 63488 00:18:08.966 } 00:18:08.966 ] 00:18:08.966 }' 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.966 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.966 [2024-12-06 16:33:50.744796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.232 [2024-12-06 16:33:50.815834] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:09.232 [2024-12-06 16:33:50.815915] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.232 [2024-12-06 16:33:50.815934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.232 [2024-12-06 16:33:50.815941] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:09.232 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.232 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:09.232 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.232 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.232 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.232 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.233 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.233 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.233 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.233 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.233 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.233 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.233 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.233 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.233 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:09.233 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.233 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.233 "name": "raid_bdev1", 00:18:09.233 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:09.233 "strip_size_kb": 64, 00:18:09.233 "state": "online", 00:18:09.233 "raid_level": "raid5f", 00:18:09.233 "superblock": true, 00:18:09.233 "num_base_bdevs": 4, 00:18:09.233 "num_base_bdevs_discovered": 3, 00:18:09.233 "num_base_bdevs_operational": 3, 00:18:09.233 "base_bdevs_list": [ 00:18:09.233 { 00:18:09.233 "name": null, 00:18:09.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.233 "is_configured": false, 00:18:09.233 "data_offset": 0, 00:18:09.233 "data_size": 63488 00:18:09.233 }, 00:18:09.233 { 00:18:09.233 "name": "BaseBdev2", 00:18:09.233 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:09.233 "is_configured": true, 00:18:09.233 "data_offset": 2048, 00:18:09.233 "data_size": 63488 00:18:09.233 }, 00:18:09.233 { 00:18:09.233 "name": "BaseBdev3", 00:18:09.234 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:09.234 "is_configured": true, 00:18:09.234 "data_offset": 2048, 00:18:09.234 "data_size": 63488 00:18:09.234 }, 00:18:09.234 { 00:18:09.234 "name": "BaseBdev4", 00:18:09.234 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:09.234 "is_configured": true, 00:18:09.234 "data_offset": 2048, 00:18:09.234 "data_size": 63488 00:18:09.234 } 00:18:09.234 ] 00:18:09.234 }' 00:18:09.234 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.234 16:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.497 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.497 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.497 
16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.498 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.498 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.498 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.498 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.498 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.498 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.498 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.498 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.498 "name": "raid_bdev1", 00:18:09.498 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:09.498 "strip_size_kb": 64, 00:18:09.498 "state": "online", 00:18:09.498 "raid_level": "raid5f", 00:18:09.498 "superblock": true, 00:18:09.498 "num_base_bdevs": 4, 00:18:09.498 "num_base_bdevs_discovered": 3, 00:18:09.498 "num_base_bdevs_operational": 3, 00:18:09.498 "base_bdevs_list": [ 00:18:09.498 { 00:18:09.498 "name": null, 00:18:09.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.498 "is_configured": false, 00:18:09.498 "data_offset": 0, 00:18:09.498 "data_size": 63488 00:18:09.498 }, 00:18:09.498 { 00:18:09.498 "name": "BaseBdev2", 00:18:09.498 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:09.498 "is_configured": true, 00:18:09.498 "data_offset": 2048, 00:18:09.498 "data_size": 63488 00:18:09.498 }, 00:18:09.498 { 00:18:09.498 "name": "BaseBdev3", 00:18:09.498 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:09.498 "is_configured": true, 00:18:09.498 "data_offset": 2048, 00:18:09.498 
"data_size": 63488 00:18:09.498 }, 00:18:09.498 { 00:18:09.498 "name": "BaseBdev4", 00:18:09.498 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:09.498 "is_configured": true, 00:18:09.498 "data_offset": 2048, 00:18:09.498 "data_size": 63488 00:18:09.498 } 00:18:09.498 ] 00:18:09.498 }' 00:18:09.498 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.758 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.758 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.758 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.758 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:09.758 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.758 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.758 [2024-12-06 16:33:51.428888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.758 [2024-12-06 16:33:51.433042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:18:09.758 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.758 16:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:09.758 [2024-12-06 16:33:51.435295] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.722 "name": "raid_bdev1", 00:18:10.722 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:10.722 "strip_size_kb": 64, 00:18:10.722 "state": "online", 00:18:10.722 "raid_level": "raid5f", 00:18:10.722 "superblock": true, 00:18:10.722 "num_base_bdevs": 4, 00:18:10.722 "num_base_bdevs_discovered": 4, 00:18:10.722 "num_base_bdevs_operational": 4, 00:18:10.722 "process": { 00:18:10.722 "type": "rebuild", 00:18:10.722 "target": "spare", 00:18:10.722 "progress": { 00:18:10.722 "blocks": 19200, 00:18:10.722 "percent": 10 00:18:10.722 } 00:18:10.722 }, 00:18:10.722 "base_bdevs_list": [ 00:18:10.722 { 00:18:10.722 "name": "spare", 00:18:10.722 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:10.722 "is_configured": true, 00:18:10.722 "data_offset": 2048, 00:18:10.722 "data_size": 63488 00:18:10.722 }, 00:18:10.722 { 00:18:10.722 "name": "BaseBdev2", 00:18:10.722 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:10.722 "is_configured": true, 00:18:10.722 "data_offset": 2048, 00:18:10.722 "data_size": 63488 00:18:10.722 }, 00:18:10.722 { 
00:18:10.722 "name": "BaseBdev3", 00:18:10.722 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:10.722 "is_configured": true, 00:18:10.722 "data_offset": 2048, 00:18:10.722 "data_size": 63488 00:18:10.722 }, 00:18:10.722 { 00:18:10.722 "name": "BaseBdev4", 00:18:10.722 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:10.722 "is_configured": true, 00:18:10.722 "data_offset": 2048, 00:18:10.722 "data_size": 63488 00:18:10.722 } 00:18:10.722 ] 00:18:10.722 }' 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.722 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.981 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.981 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:10.981 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:10.981 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:10.981 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:10.981 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:10.981 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=542 00:18:10.981 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:10.981 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.981 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.981 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.981 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.981 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.982 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.982 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.982 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.982 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.982 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.982 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.982 "name": "raid_bdev1", 00:18:10.982 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:10.982 "strip_size_kb": 64, 00:18:10.982 "state": "online", 00:18:10.982 "raid_level": "raid5f", 00:18:10.982 "superblock": true, 00:18:10.982 "num_base_bdevs": 4, 00:18:10.982 "num_base_bdevs_discovered": 4, 00:18:10.982 "num_base_bdevs_operational": 4, 00:18:10.982 "process": { 00:18:10.982 "type": "rebuild", 00:18:10.982 "target": "spare", 00:18:10.982 "progress": { 00:18:10.982 "blocks": 21120, 00:18:10.982 "percent": 11 00:18:10.982 } 00:18:10.982 }, 00:18:10.982 "base_bdevs_list": [ 00:18:10.982 { 00:18:10.982 "name": "spare", 00:18:10.982 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:10.982 "is_configured": true, 00:18:10.982 "data_offset": 2048, 00:18:10.982 "data_size": 63488 00:18:10.982 }, 00:18:10.982 { 00:18:10.982 "name": "BaseBdev2", 00:18:10.982 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:10.982 "is_configured": true, 00:18:10.982 "data_offset": 2048, 00:18:10.982 "data_size": 63488 00:18:10.982 }, 00:18:10.982 { 
00:18:10.982 "name": "BaseBdev3", 00:18:10.982 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:10.982 "is_configured": true, 00:18:10.982 "data_offset": 2048, 00:18:10.982 "data_size": 63488 00:18:10.982 }, 00:18:10.982 { 00:18:10.982 "name": "BaseBdev4", 00:18:10.982 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:10.982 "is_configured": true, 00:18:10.982 "data_offset": 2048, 00:18:10.982 "data_size": 63488 00:18:10.982 } 00:18:10.982 ] 00:18:10.982 }' 00:18:10.982 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.982 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.982 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.982 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.982 16:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:11.918 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.918 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.918 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.918 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.918 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.918 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.918 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.918 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.918 16:33:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.918 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.177 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.178 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.178 "name": "raid_bdev1", 00:18:12.178 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:12.178 "strip_size_kb": 64, 00:18:12.178 "state": "online", 00:18:12.178 "raid_level": "raid5f", 00:18:12.178 "superblock": true, 00:18:12.178 "num_base_bdevs": 4, 00:18:12.178 "num_base_bdevs_discovered": 4, 00:18:12.178 "num_base_bdevs_operational": 4, 00:18:12.178 "process": { 00:18:12.178 "type": "rebuild", 00:18:12.178 "target": "spare", 00:18:12.178 "progress": { 00:18:12.178 "blocks": 42240, 00:18:12.178 "percent": 22 00:18:12.178 } 00:18:12.178 }, 00:18:12.178 "base_bdevs_list": [ 00:18:12.178 { 00:18:12.178 "name": "spare", 00:18:12.178 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:12.178 "is_configured": true, 00:18:12.178 "data_offset": 2048, 00:18:12.178 "data_size": 63488 00:18:12.178 }, 00:18:12.178 { 00:18:12.178 "name": "BaseBdev2", 00:18:12.178 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:12.178 "is_configured": true, 00:18:12.178 "data_offset": 2048, 00:18:12.178 "data_size": 63488 00:18:12.178 }, 00:18:12.178 { 00:18:12.178 "name": "BaseBdev3", 00:18:12.178 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:12.178 "is_configured": true, 00:18:12.178 "data_offset": 2048, 00:18:12.178 "data_size": 63488 00:18:12.178 }, 00:18:12.178 { 00:18:12.178 "name": "BaseBdev4", 00:18:12.178 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:12.178 "is_configured": true, 00:18:12.178 "data_offset": 2048, 00:18:12.178 "data_size": 63488 00:18:12.178 } 00:18:12.178 ] 00:18:12.178 }' 00:18:12.178 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.178 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.178 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.178 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.178 16:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.114 "name": "raid_bdev1", 00:18:13.114 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:13.114 "strip_size_kb": 64, 00:18:13.114 "state": 
"online", 00:18:13.114 "raid_level": "raid5f", 00:18:13.114 "superblock": true, 00:18:13.114 "num_base_bdevs": 4, 00:18:13.114 "num_base_bdevs_discovered": 4, 00:18:13.114 "num_base_bdevs_operational": 4, 00:18:13.114 "process": { 00:18:13.114 "type": "rebuild", 00:18:13.114 "target": "spare", 00:18:13.114 "progress": { 00:18:13.114 "blocks": 63360, 00:18:13.114 "percent": 33 00:18:13.114 } 00:18:13.114 }, 00:18:13.114 "base_bdevs_list": [ 00:18:13.114 { 00:18:13.114 "name": "spare", 00:18:13.114 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:13.114 "is_configured": true, 00:18:13.114 "data_offset": 2048, 00:18:13.114 "data_size": 63488 00:18:13.114 }, 00:18:13.114 { 00:18:13.114 "name": "BaseBdev2", 00:18:13.114 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:13.114 "is_configured": true, 00:18:13.114 "data_offset": 2048, 00:18:13.114 "data_size": 63488 00:18:13.114 }, 00:18:13.114 { 00:18:13.114 "name": "BaseBdev3", 00:18:13.114 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:13.114 "is_configured": true, 00:18:13.114 "data_offset": 2048, 00:18:13.114 "data_size": 63488 00:18:13.114 }, 00:18:13.114 { 00:18:13.114 "name": "BaseBdev4", 00:18:13.114 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:13.114 "is_configured": true, 00:18:13.114 "data_offset": 2048, 00:18:13.114 "data_size": 63488 00:18:13.114 } 00:18:13.114 ] 00:18:13.114 }' 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.114 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.372 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.372 16:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:14.306 16:33:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.306 16:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.306 16:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.306 16:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.306 16:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.306 16:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.306 16:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.306 16:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.306 16:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.306 16:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.306 16:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.306 16:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.306 "name": "raid_bdev1", 00:18:14.306 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:14.306 "strip_size_kb": 64, 00:18:14.306 "state": "online", 00:18:14.306 "raid_level": "raid5f", 00:18:14.306 "superblock": true, 00:18:14.306 "num_base_bdevs": 4, 00:18:14.306 "num_base_bdevs_discovered": 4, 00:18:14.306 "num_base_bdevs_operational": 4, 00:18:14.306 "process": { 00:18:14.306 "type": "rebuild", 00:18:14.306 "target": "spare", 00:18:14.306 "progress": { 00:18:14.306 "blocks": 86400, 00:18:14.306 "percent": 45 00:18:14.306 } 00:18:14.306 }, 00:18:14.306 "base_bdevs_list": [ 00:18:14.306 { 00:18:14.306 "name": "spare", 00:18:14.306 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 
00:18:14.306 "is_configured": true, 00:18:14.306 "data_offset": 2048, 00:18:14.306 "data_size": 63488 00:18:14.306 }, 00:18:14.306 { 00:18:14.306 "name": "BaseBdev2", 00:18:14.306 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:14.306 "is_configured": true, 00:18:14.306 "data_offset": 2048, 00:18:14.306 "data_size": 63488 00:18:14.306 }, 00:18:14.306 { 00:18:14.306 "name": "BaseBdev3", 00:18:14.306 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:14.306 "is_configured": true, 00:18:14.306 "data_offset": 2048, 00:18:14.306 "data_size": 63488 00:18:14.306 }, 00:18:14.306 { 00:18:14.306 "name": "BaseBdev4", 00:18:14.306 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:14.306 "is_configured": true, 00:18:14.306 "data_offset": 2048, 00:18:14.306 "data_size": 63488 00:18:14.306 } 00:18:14.306 ] 00:18:14.306 }' 00:18:14.306 16:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.306 16:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.306 16:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.306 16:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.306 16:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:15.307 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.307 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.307 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.307 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.307 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.307 16:33:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.307 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.307 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.307 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.307 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.567 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.567 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.567 "name": "raid_bdev1", 00:18:15.567 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:15.567 "strip_size_kb": 64, 00:18:15.567 "state": "online", 00:18:15.567 "raid_level": "raid5f", 00:18:15.567 "superblock": true, 00:18:15.567 "num_base_bdevs": 4, 00:18:15.567 "num_base_bdevs_discovered": 4, 00:18:15.567 "num_base_bdevs_operational": 4, 00:18:15.567 "process": { 00:18:15.567 "type": "rebuild", 00:18:15.567 "target": "spare", 00:18:15.567 "progress": { 00:18:15.567 "blocks": 107520, 00:18:15.567 "percent": 56 00:18:15.567 } 00:18:15.567 }, 00:18:15.567 "base_bdevs_list": [ 00:18:15.567 { 00:18:15.567 "name": "spare", 00:18:15.567 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:15.567 "is_configured": true, 00:18:15.567 "data_offset": 2048, 00:18:15.567 "data_size": 63488 00:18:15.567 }, 00:18:15.567 { 00:18:15.567 "name": "BaseBdev2", 00:18:15.567 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:15.567 "is_configured": true, 00:18:15.567 "data_offset": 2048, 00:18:15.567 "data_size": 63488 00:18:15.567 }, 00:18:15.567 { 00:18:15.567 "name": "BaseBdev3", 00:18:15.567 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:15.567 "is_configured": true, 00:18:15.567 "data_offset": 2048, 00:18:15.567 
"data_size": 63488 00:18:15.567 }, 00:18:15.567 { 00:18:15.567 "name": "BaseBdev4", 00:18:15.567 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:15.567 "is_configured": true, 00:18:15.567 "data_offset": 2048, 00:18:15.567 "data_size": 63488 00:18:15.567 } 00:18:15.567 ] 00:18:15.567 }' 00:18:15.567 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.567 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.567 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.567 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.567 16:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.506 
16:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.506 "name": "raid_bdev1", 00:18:16.506 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:16.506 "strip_size_kb": 64, 00:18:16.506 "state": "online", 00:18:16.506 "raid_level": "raid5f", 00:18:16.506 "superblock": true, 00:18:16.506 "num_base_bdevs": 4, 00:18:16.506 "num_base_bdevs_discovered": 4, 00:18:16.506 "num_base_bdevs_operational": 4, 00:18:16.506 "process": { 00:18:16.506 "type": "rebuild", 00:18:16.506 "target": "spare", 00:18:16.506 "progress": { 00:18:16.506 "blocks": 128640, 00:18:16.506 "percent": 67 00:18:16.506 } 00:18:16.506 }, 00:18:16.506 "base_bdevs_list": [ 00:18:16.506 { 00:18:16.506 "name": "spare", 00:18:16.506 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:16.506 "is_configured": true, 00:18:16.506 "data_offset": 2048, 00:18:16.506 "data_size": 63488 00:18:16.506 }, 00:18:16.506 { 00:18:16.506 "name": "BaseBdev2", 00:18:16.506 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:16.506 "is_configured": true, 00:18:16.506 "data_offset": 2048, 00:18:16.506 "data_size": 63488 00:18:16.506 }, 00:18:16.506 { 00:18:16.506 "name": "BaseBdev3", 00:18:16.506 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:16.506 "is_configured": true, 00:18:16.506 "data_offset": 2048, 00:18:16.506 "data_size": 63488 00:18:16.506 }, 00:18:16.506 { 00:18:16.506 "name": "BaseBdev4", 00:18:16.506 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:16.506 "is_configured": true, 00:18:16.506 "data_offset": 2048, 00:18:16.506 "data_size": 63488 00:18:16.506 } 00:18:16.506 ] 00:18:16.506 }' 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.506 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.506 16:33:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.765 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.765 16:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:17.703 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:17.703 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.703 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.703 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.703 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.703 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.703 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.703 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.703 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.703 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.703 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.703 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.703 "name": "raid_bdev1", 00:18:17.703 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:17.703 "strip_size_kb": 64, 00:18:17.703 "state": "online", 00:18:17.703 "raid_level": "raid5f", 00:18:17.703 "superblock": true, 00:18:17.703 "num_base_bdevs": 4, 00:18:17.703 "num_base_bdevs_discovered": 4, 00:18:17.703 "num_base_bdevs_operational": 
4, 00:18:17.703 "process": { 00:18:17.704 "type": "rebuild", 00:18:17.704 "target": "spare", 00:18:17.704 "progress": { 00:18:17.704 "blocks": 149760, 00:18:17.704 "percent": 78 00:18:17.704 } 00:18:17.704 }, 00:18:17.704 "base_bdevs_list": [ 00:18:17.704 { 00:18:17.704 "name": "spare", 00:18:17.704 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:17.704 "is_configured": true, 00:18:17.704 "data_offset": 2048, 00:18:17.704 "data_size": 63488 00:18:17.704 }, 00:18:17.704 { 00:18:17.704 "name": "BaseBdev2", 00:18:17.704 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:17.704 "is_configured": true, 00:18:17.704 "data_offset": 2048, 00:18:17.704 "data_size": 63488 00:18:17.704 }, 00:18:17.704 { 00:18:17.704 "name": "BaseBdev3", 00:18:17.704 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:17.704 "is_configured": true, 00:18:17.704 "data_offset": 2048, 00:18:17.704 "data_size": 63488 00:18:17.704 }, 00:18:17.704 { 00:18:17.704 "name": "BaseBdev4", 00:18:17.704 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:17.704 "is_configured": true, 00:18:17.704 "data_offset": 2048, 00:18:17.704 "data_size": 63488 00:18:17.704 } 00:18:17.704 ] 00:18:17.704 }' 00:18:17.704 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.704 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.704 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.704 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.704 16:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.082 
16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.082 "name": "raid_bdev1", 00:18:19.082 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:19.082 "strip_size_kb": 64, 00:18:19.082 "state": "online", 00:18:19.082 "raid_level": "raid5f", 00:18:19.082 "superblock": true, 00:18:19.082 "num_base_bdevs": 4, 00:18:19.082 "num_base_bdevs_discovered": 4, 00:18:19.082 "num_base_bdevs_operational": 4, 00:18:19.082 "process": { 00:18:19.082 "type": "rebuild", 00:18:19.082 "target": "spare", 00:18:19.082 "progress": { 00:18:19.082 "blocks": 172800, 00:18:19.082 "percent": 90 00:18:19.082 } 00:18:19.082 }, 00:18:19.082 "base_bdevs_list": [ 00:18:19.082 { 00:18:19.082 "name": "spare", 00:18:19.082 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:19.082 "is_configured": true, 00:18:19.082 "data_offset": 2048, 00:18:19.082 "data_size": 63488 00:18:19.082 }, 00:18:19.082 { 00:18:19.082 "name": "BaseBdev2", 00:18:19.082 "uuid": 
"28b3a80e-7999-516f-90ad-974a553369e7", 00:18:19.082 "is_configured": true, 00:18:19.082 "data_offset": 2048, 00:18:19.082 "data_size": 63488 00:18:19.082 }, 00:18:19.082 { 00:18:19.082 "name": "BaseBdev3", 00:18:19.082 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:19.082 "is_configured": true, 00:18:19.082 "data_offset": 2048, 00:18:19.082 "data_size": 63488 00:18:19.082 }, 00:18:19.082 { 00:18:19.082 "name": "BaseBdev4", 00:18:19.082 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:19.082 "is_configured": true, 00:18:19.082 "data_offset": 2048, 00:18:19.082 "data_size": 63488 00:18:19.082 } 00:18:19.082 ] 00:18:19.082 }' 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.082 16:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.017 [2024-12-06 16:34:01.493297] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:20.017 [2024-12-06 16:34:01.493508] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:20.017 [2024-12-06 16:34:01.493694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.017 "name": "raid_bdev1", 00:18:20.017 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:20.017 "strip_size_kb": 64, 00:18:20.017 "state": "online", 00:18:20.017 "raid_level": "raid5f", 00:18:20.017 "superblock": true, 00:18:20.017 "num_base_bdevs": 4, 00:18:20.017 "num_base_bdevs_discovered": 4, 00:18:20.017 "num_base_bdevs_operational": 4, 00:18:20.017 "base_bdevs_list": [ 00:18:20.017 { 00:18:20.017 "name": "spare", 00:18:20.017 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:20.017 "is_configured": true, 00:18:20.017 "data_offset": 2048, 00:18:20.017 "data_size": 63488 00:18:20.017 }, 00:18:20.017 { 00:18:20.017 "name": "BaseBdev2", 00:18:20.017 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:20.017 "is_configured": true, 00:18:20.017 "data_offset": 2048, 00:18:20.017 "data_size": 63488 00:18:20.017 }, 00:18:20.017 { 00:18:20.017 "name": "BaseBdev3", 00:18:20.017 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:20.017 "is_configured": true, 00:18:20.017 "data_offset": 2048, 00:18:20.017 "data_size": 63488 00:18:20.017 }, 
00:18:20.017 { 00:18:20.017 "name": "BaseBdev4", 00:18:20.017 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:20.017 "is_configured": true, 00:18:20.017 "data_offset": 2048, 00:18:20.017 "data_size": 63488 00:18:20.017 } 00:18:20.017 ] 00:18:20.017 }' 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.017 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.017 "name": "raid_bdev1", 00:18:20.017 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:20.017 "strip_size_kb": 64, 00:18:20.017 "state": "online", 00:18:20.017 "raid_level": "raid5f", 00:18:20.017 "superblock": true, 00:18:20.017 "num_base_bdevs": 4, 00:18:20.017 "num_base_bdevs_discovered": 4, 00:18:20.017 "num_base_bdevs_operational": 4, 00:18:20.017 "base_bdevs_list": [ 00:18:20.017 { 00:18:20.017 "name": "spare", 00:18:20.017 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:20.017 "is_configured": true, 00:18:20.017 "data_offset": 2048, 00:18:20.017 "data_size": 63488 00:18:20.017 }, 00:18:20.017 { 00:18:20.017 "name": "BaseBdev2", 00:18:20.017 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:20.017 "is_configured": true, 00:18:20.018 "data_offset": 2048, 00:18:20.018 "data_size": 63488 00:18:20.018 }, 00:18:20.018 { 00:18:20.018 "name": "BaseBdev3", 00:18:20.018 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:20.018 "is_configured": true, 00:18:20.018 "data_offset": 2048, 00:18:20.018 "data_size": 63488 00:18:20.018 }, 00:18:20.018 { 00:18:20.018 "name": "BaseBdev4", 00:18:20.018 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:20.018 "is_configured": true, 00:18:20.018 "data_offset": 2048, 00:18:20.018 "data_size": 63488 00:18:20.018 } 00:18:20.018 ] 00:18:20.018 }' 00:18:20.018 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:20.276 16:34:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.276 "name": "raid_bdev1", 00:18:20.276 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:20.276 "strip_size_kb": 64, 00:18:20.276 "state": "online", 00:18:20.276 "raid_level": "raid5f", 00:18:20.276 "superblock": true, 00:18:20.276 "num_base_bdevs": 4, 00:18:20.276 "num_base_bdevs_discovered": 4, 00:18:20.276 "num_base_bdevs_operational": 4, 00:18:20.276 
"base_bdevs_list": [ 00:18:20.276 { 00:18:20.276 "name": "spare", 00:18:20.276 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:20.276 "is_configured": true, 00:18:20.276 "data_offset": 2048, 00:18:20.276 "data_size": 63488 00:18:20.276 }, 00:18:20.276 { 00:18:20.276 "name": "BaseBdev2", 00:18:20.276 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:20.276 "is_configured": true, 00:18:20.276 "data_offset": 2048, 00:18:20.276 "data_size": 63488 00:18:20.276 }, 00:18:20.276 { 00:18:20.276 "name": "BaseBdev3", 00:18:20.276 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:20.276 "is_configured": true, 00:18:20.276 "data_offset": 2048, 00:18:20.276 "data_size": 63488 00:18:20.276 }, 00:18:20.276 { 00:18:20.276 "name": "BaseBdev4", 00:18:20.276 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:20.276 "is_configured": true, 00:18:20.276 "data_offset": 2048, 00:18:20.276 "data_size": 63488 00:18:20.276 } 00:18:20.276 ] 00:18:20.276 }' 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.276 16:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.843 [2024-12-06 16:34:02.401994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.843 [2024-12-06 16:34:02.402091] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.843 [2024-12-06 16:34:02.402254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.843 [2024-12-06 16:34:02.402412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:18:20.843 [2024-12-06 16:34:02.402468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:20.843 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:20.843 /dev/nbd0 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.102 1+0 records in 00:18:21.102 1+0 records out 00:18:21.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600756 s, 6.8 MB/s 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:21.102 16:34:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:21.102 /dev/nbd1 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:21.102 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:21.361 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:21.361 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:21.361 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:21.361 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:18:21.361 1+0 records in 00:18:21.361 1+0 records out 00:18:21.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658028 s, 6.2 MB/s 00:18:21.361 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.361 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:21.361 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.361 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:21.361 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:21.361 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.361 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:21.361 16:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:21.361 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:21.361 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.361 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:21.361 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:21.361 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:21.361 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:21.361 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:21.619 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:18:21.619 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:21.619 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:21.619 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:21.619 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:21.619 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:21.619 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:21.619 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:21.619 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:21.619 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.877 [2024-12-06 16:34:03.525938] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:21.877 [2024-12-06 16:34:03.526036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.877 [2024-12-06 16:34:03.526060] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:21.877 [2024-12-06 16:34:03.526070] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.877 [2024-12-06 16:34:03.528405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.877 [2024-12-06 16:34:03.528448] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:21.877 [2024-12-06 16:34:03.528532] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:21.877 [2024-12-06 16:34:03.528588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.877 [2024-12-06 16:34:03.528719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.877 [2024-12-06 16:34:03.528823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:21.877 [2024-12-06 16:34:03.528890] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:21.877 spare 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.877 [2024-12-06 16:34:03.628815] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:18:21.877 [2024-12-06 16:34:03.628862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:21.877 [2024-12-06 16:34:03.629191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:18:21.877 [2024-12-06 16:34:03.629756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:18:21.877 [2024-12-06 16:34:03.629784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:18:21.877 [2024-12-06 16:34:03.629965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.877 "name": "raid_bdev1", 00:18:21.877 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:21.877 "strip_size_kb": 64, 00:18:21.877 "state": "online", 00:18:21.877 "raid_level": "raid5f", 00:18:21.877 "superblock": true, 00:18:21.877 "num_base_bdevs": 4, 00:18:21.877 "num_base_bdevs_discovered": 4, 00:18:21.877 "num_base_bdevs_operational": 4, 00:18:21.877 "base_bdevs_list": [ 00:18:21.877 { 00:18:21.877 "name": "spare", 00:18:21.877 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:21.877 "is_configured": true, 00:18:21.877 "data_offset": 2048, 00:18:21.877 "data_size": 63488 00:18:21.877 }, 00:18:21.877 { 00:18:21.877 "name": "BaseBdev2", 00:18:21.877 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:21.877 "is_configured": true, 00:18:21.877 "data_offset": 
2048, 00:18:21.877 "data_size": 63488 00:18:21.877 }, 00:18:21.877 { 00:18:21.877 "name": "BaseBdev3", 00:18:21.877 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:21.877 "is_configured": true, 00:18:21.877 "data_offset": 2048, 00:18:21.877 "data_size": 63488 00:18:21.877 }, 00:18:21.877 { 00:18:21.877 "name": "BaseBdev4", 00:18:21.877 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:21.877 "is_configured": true, 00:18:21.877 "data_offset": 2048, 00:18:21.877 "data_size": 63488 00:18:21.877 } 00:18:21.877 ] 00:18:21.877 }' 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.877 16:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.444 "name": 
"raid_bdev1", 00:18:22.444 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:22.444 "strip_size_kb": 64, 00:18:22.444 "state": "online", 00:18:22.444 "raid_level": "raid5f", 00:18:22.444 "superblock": true, 00:18:22.444 "num_base_bdevs": 4, 00:18:22.444 "num_base_bdevs_discovered": 4, 00:18:22.444 "num_base_bdevs_operational": 4, 00:18:22.444 "base_bdevs_list": [ 00:18:22.444 { 00:18:22.444 "name": "spare", 00:18:22.444 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:22.444 "is_configured": true, 00:18:22.444 "data_offset": 2048, 00:18:22.444 "data_size": 63488 00:18:22.444 }, 00:18:22.444 { 00:18:22.444 "name": "BaseBdev2", 00:18:22.444 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:22.444 "is_configured": true, 00:18:22.444 "data_offset": 2048, 00:18:22.444 "data_size": 63488 00:18:22.444 }, 00:18:22.444 { 00:18:22.444 "name": "BaseBdev3", 00:18:22.444 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:22.444 "is_configured": true, 00:18:22.444 "data_offset": 2048, 00:18:22.444 "data_size": 63488 00:18:22.444 }, 00:18:22.444 { 00:18:22.444 "name": "BaseBdev4", 00:18:22.444 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:22.444 "is_configured": true, 00:18:22.444 "data_offset": 2048, 00:18:22.444 "data_size": 63488 00:18:22.444 } 00:18:22.444 ] 00:18:22.444 }' 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.444 
16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.444 [2024-12-06 16:34:04.228955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.444 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.444 "name": "raid_bdev1", 00:18:22.444 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:22.444 "strip_size_kb": 64, 00:18:22.445 "state": "online", 00:18:22.445 "raid_level": "raid5f", 00:18:22.445 "superblock": true, 00:18:22.445 "num_base_bdevs": 4, 00:18:22.445 "num_base_bdevs_discovered": 3, 00:18:22.445 "num_base_bdevs_operational": 3, 00:18:22.445 "base_bdevs_list": [ 00:18:22.445 { 00:18:22.445 "name": null, 00:18:22.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.445 "is_configured": false, 00:18:22.445 "data_offset": 0, 00:18:22.445 "data_size": 63488 00:18:22.445 }, 00:18:22.445 { 00:18:22.445 "name": "BaseBdev2", 00:18:22.445 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:22.445 "is_configured": true, 00:18:22.445 "data_offset": 2048, 00:18:22.445 "data_size": 63488 00:18:22.445 }, 00:18:22.445 { 00:18:22.445 "name": "BaseBdev3", 00:18:22.445 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:22.445 "is_configured": true, 00:18:22.445 "data_offset": 2048, 00:18:22.445 "data_size": 63488 00:18:22.445 }, 00:18:22.445 { 00:18:22.445 "name": "BaseBdev4", 00:18:22.445 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:22.445 "is_configured": true, 00:18:22.445 "data_offset": 
2048, 00:18:22.445 "data_size": 63488 00:18:22.445 } 00:18:22.445 ] 00:18:22.445 }' 00:18:22.445 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.445 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.009 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:23.009 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.009 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.009 [2024-12-06 16:34:04.664268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.009 [2024-12-06 16:34:04.664547] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:23.009 [2024-12-06 16:34:04.664619] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:23.009 [2024-12-06 16:34:04.664722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.009 [2024-12-06 16:34:04.668910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:18:23.009 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.009 16:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:23.009 [2024-12-06 16:34:04.671475] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:23.943 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.943 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.943 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.943 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.943 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.943 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.943 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.943 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.943 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.943 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.943 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.943 "name": "raid_bdev1", 00:18:23.943 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:23.943 "strip_size_kb": 64, 00:18:23.943 "state": "online", 00:18:23.943 
"raid_level": "raid5f", 00:18:23.943 "superblock": true, 00:18:23.943 "num_base_bdevs": 4, 00:18:23.943 "num_base_bdevs_discovered": 4, 00:18:23.943 "num_base_bdevs_operational": 4, 00:18:23.943 "process": { 00:18:23.943 "type": "rebuild", 00:18:23.943 "target": "spare", 00:18:23.943 "progress": { 00:18:23.943 "blocks": 19200, 00:18:23.943 "percent": 10 00:18:23.943 } 00:18:23.943 }, 00:18:23.943 "base_bdevs_list": [ 00:18:23.943 { 00:18:23.943 "name": "spare", 00:18:23.943 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:23.943 "is_configured": true, 00:18:23.943 "data_offset": 2048, 00:18:23.944 "data_size": 63488 00:18:23.944 }, 00:18:23.944 { 00:18:23.944 "name": "BaseBdev2", 00:18:23.944 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:23.944 "is_configured": true, 00:18:23.944 "data_offset": 2048, 00:18:23.944 "data_size": 63488 00:18:23.944 }, 00:18:23.944 { 00:18:23.944 "name": "BaseBdev3", 00:18:23.944 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:23.944 "is_configured": true, 00:18:23.944 "data_offset": 2048, 00:18:23.944 "data_size": 63488 00:18:23.944 }, 00:18:23.944 { 00:18:23.944 "name": "BaseBdev4", 00:18:23.944 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:23.944 "is_configured": true, 00:18:23.944 "data_offset": 2048, 00:18:23.944 "data_size": 63488 00:18:23.944 } 00:18:23.944 ] 00:18:23.944 }' 00:18:23.944 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.944 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.208 [2024-12-06 16:34:05.823308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.208 [2024-12-06 16:34:05.879320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:24.208 [2024-12-06 16:34:05.879425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.208 [2024-12-06 16:34:05.879488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.208 [2024-12-06 16:34:05.879515] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.208 "name": "raid_bdev1", 00:18:24.208 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:24.208 "strip_size_kb": 64, 00:18:24.208 "state": "online", 00:18:24.208 "raid_level": "raid5f", 00:18:24.208 "superblock": true, 00:18:24.208 "num_base_bdevs": 4, 00:18:24.208 "num_base_bdevs_discovered": 3, 00:18:24.208 "num_base_bdevs_operational": 3, 00:18:24.208 "base_bdevs_list": [ 00:18:24.208 { 00:18:24.208 "name": null, 00:18:24.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.208 "is_configured": false, 00:18:24.208 "data_offset": 0, 00:18:24.208 "data_size": 63488 00:18:24.208 }, 00:18:24.208 { 00:18:24.208 "name": "BaseBdev2", 00:18:24.208 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:24.208 "is_configured": true, 00:18:24.208 "data_offset": 2048, 00:18:24.208 "data_size": 63488 00:18:24.208 }, 00:18:24.208 { 00:18:24.208 "name": "BaseBdev3", 00:18:24.208 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:24.208 "is_configured": true, 00:18:24.208 "data_offset": 2048, 00:18:24.208 "data_size": 63488 00:18:24.208 }, 00:18:24.208 { 00:18:24.208 "name": "BaseBdev4", 00:18:24.208 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:24.208 "is_configured": true, 00:18:24.208 "data_offset": 2048, 00:18:24.208 "data_size": 63488 00:18:24.208 } 00:18:24.208 ] 00:18:24.208 
}' 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.208 16:34:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.783 16:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:24.783 16:34:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.783 16:34:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.783 [2024-12-06 16:34:06.332186] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:24.783 [2024-12-06 16:34:06.332328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.783 [2024-12-06 16:34:06.332391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:24.783 [2024-12-06 16:34:06.332425] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.783 [2024-12-06 16:34:06.332999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.783 [2024-12-06 16:34:06.333067] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:24.783 [2024-12-06 16:34:06.333260] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:24.783 [2024-12-06 16:34:06.333307] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:24.783 [2024-12-06 16:34:06.333380] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:24.783 [2024-12-06 16:34:06.333456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.783 [2024-12-06 16:34:06.337783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:24.783 spare 00:18:24.783 16:34:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.783 16:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:24.783 [2024-12-06 16:34:06.340423] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.745 "name": "raid_bdev1", 00:18:25.745 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:25.745 "strip_size_kb": 64, 00:18:25.745 "state": 
"online", 00:18:25.745 "raid_level": "raid5f", 00:18:25.745 "superblock": true, 00:18:25.745 "num_base_bdevs": 4, 00:18:25.745 "num_base_bdevs_discovered": 4, 00:18:25.745 "num_base_bdevs_operational": 4, 00:18:25.745 "process": { 00:18:25.745 "type": "rebuild", 00:18:25.745 "target": "spare", 00:18:25.745 "progress": { 00:18:25.745 "blocks": 19200, 00:18:25.745 "percent": 10 00:18:25.745 } 00:18:25.745 }, 00:18:25.745 "base_bdevs_list": [ 00:18:25.745 { 00:18:25.745 "name": "spare", 00:18:25.745 "uuid": "b0d2984a-3723-5e77-8ae9-a80728a2048d", 00:18:25.745 "is_configured": true, 00:18:25.745 "data_offset": 2048, 00:18:25.745 "data_size": 63488 00:18:25.745 }, 00:18:25.745 { 00:18:25.745 "name": "BaseBdev2", 00:18:25.745 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:25.745 "is_configured": true, 00:18:25.745 "data_offset": 2048, 00:18:25.745 "data_size": 63488 00:18:25.745 }, 00:18:25.745 { 00:18:25.745 "name": "BaseBdev3", 00:18:25.745 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:25.745 "is_configured": true, 00:18:25.745 "data_offset": 2048, 00:18:25.745 "data_size": 63488 00:18:25.745 }, 00:18:25.745 { 00:18:25.745 "name": "BaseBdev4", 00:18:25.745 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:25.745 "is_configured": true, 00:18:25.745 "data_offset": 2048, 00:18:25.745 "data_size": 63488 00:18:25.745 } 00:18:25.745 ] 00:18:25.745 }' 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:25.745 16:34:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.745 [2024-12-06 16:34:07.476120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:25.745 [2024-12-06 16:34:07.547391] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:25.745 [2024-12-06 16:34:07.547510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.745 [2024-12-06 16:34:07.547530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:25.745 [2024-12-06 16:34:07.547541] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.745 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:25.746 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.746 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.746 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.746 16:34:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.746 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.746 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.746 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.746 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.746 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.004 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.004 "name": "raid_bdev1", 00:18:26.004 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:26.004 "strip_size_kb": 64, 00:18:26.004 "state": "online", 00:18:26.004 "raid_level": "raid5f", 00:18:26.004 "superblock": true, 00:18:26.004 "num_base_bdevs": 4, 00:18:26.004 "num_base_bdevs_discovered": 3, 00:18:26.004 "num_base_bdevs_operational": 3, 00:18:26.004 "base_bdevs_list": [ 00:18:26.004 { 00:18:26.004 "name": null, 00:18:26.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.004 "is_configured": false, 00:18:26.004 "data_offset": 0, 00:18:26.004 "data_size": 63488 00:18:26.004 }, 00:18:26.004 { 00:18:26.004 "name": "BaseBdev2", 00:18:26.004 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:26.004 "is_configured": true, 00:18:26.004 "data_offset": 2048, 00:18:26.004 "data_size": 63488 00:18:26.004 }, 00:18:26.004 { 00:18:26.004 "name": "BaseBdev3", 00:18:26.004 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:26.004 "is_configured": true, 00:18:26.004 "data_offset": 2048, 00:18:26.004 "data_size": 63488 00:18:26.004 }, 00:18:26.004 { 00:18:26.005 "name": "BaseBdev4", 00:18:26.005 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:26.005 "is_configured": true, 00:18:26.005 "data_offset": 2048, 00:18:26.005 
"data_size": 63488 00:18:26.005 } 00:18:26.005 ] 00:18:26.005 }' 00:18:26.005 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.005 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.277 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.277 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.277 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.277 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.277 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.277 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.277 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.277 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.277 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.277 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.277 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.277 "name": "raid_bdev1", 00:18:26.277 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:26.277 "strip_size_kb": 64, 00:18:26.277 "state": "online", 00:18:26.278 "raid_level": "raid5f", 00:18:26.278 "superblock": true, 00:18:26.278 "num_base_bdevs": 4, 00:18:26.278 "num_base_bdevs_discovered": 3, 00:18:26.278 "num_base_bdevs_operational": 3, 00:18:26.278 "base_bdevs_list": [ 00:18:26.278 { 00:18:26.278 "name": null, 00:18:26.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.278 
"is_configured": false, 00:18:26.278 "data_offset": 0, 00:18:26.278 "data_size": 63488 00:18:26.278 }, 00:18:26.278 { 00:18:26.278 "name": "BaseBdev2", 00:18:26.278 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:26.278 "is_configured": true, 00:18:26.278 "data_offset": 2048, 00:18:26.278 "data_size": 63488 00:18:26.278 }, 00:18:26.278 { 00:18:26.278 "name": "BaseBdev3", 00:18:26.278 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:26.278 "is_configured": true, 00:18:26.278 "data_offset": 2048, 00:18:26.278 "data_size": 63488 00:18:26.278 }, 00:18:26.278 { 00:18:26.278 "name": "BaseBdev4", 00:18:26.278 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:26.278 "is_configured": true, 00:18:26.278 "data_offset": 2048, 00:18:26.278 "data_size": 63488 00:18:26.278 } 00:18:26.278 ] 00:18:26.278 }' 00:18:26.278 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.278 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.278 16:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.278 16:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.278 16:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:26.278 16:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.278 16:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.278 16:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.278 16:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:26.278 16:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.278 16:34:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.278 [2024-12-06 16:34:08.020427] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:26.278 [2024-12-06 16:34:08.020508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.278 [2024-12-06 16:34:08.020532] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:26.278 [2024-12-06 16:34:08.020544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.278 [2024-12-06 16:34:08.021066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.278 [2024-12-06 16:34:08.021106] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:26.278 [2024-12-06 16:34:08.021192] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:26.278 [2024-12-06 16:34:08.021238] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:26.278 [2024-12-06 16:34:08.021247] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:26.278 [2024-12-06 16:34:08.021260] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:26.278 BaseBdev1 00:18:26.278 16:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.278 16:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.221 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.480 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.480 "name": "raid_bdev1", 00:18:27.480 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:27.480 "strip_size_kb": 64, 00:18:27.480 "state": "online", 00:18:27.480 "raid_level": "raid5f", 00:18:27.480 "superblock": true, 00:18:27.480 "num_base_bdevs": 4, 00:18:27.480 "num_base_bdevs_discovered": 3, 00:18:27.480 "num_base_bdevs_operational": 3, 00:18:27.480 "base_bdevs_list": [ 00:18:27.480 { 00:18:27.480 "name": null, 00:18:27.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.480 "is_configured": false, 00:18:27.480 
"data_offset": 0, 00:18:27.480 "data_size": 63488 00:18:27.480 }, 00:18:27.480 { 00:18:27.480 "name": "BaseBdev2", 00:18:27.480 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:27.480 "is_configured": true, 00:18:27.480 "data_offset": 2048, 00:18:27.480 "data_size": 63488 00:18:27.480 }, 00:18:27.480 { 00:18:27.480 "name": "BaseBdev3", 00:18:27.480 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:27.480 "is_configured": true, 00:18:27.480 "data_offset": 2048, 00:18:27.480 "data_size": 63488 00:18:27.480 }, 00:18:27.480 { 00:18:27.480 "name": "BaseBdev4", 00:18:27.480 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:27.480 "is_configured": true, 00:18:27.480 "data_offset": 2048, 00:18:27.480 "data_size": 63488 00:18:27.480 } 00:18:27.480 ] 00:18:27.480 }' 00:18:27.480 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.480 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.739 "name": "raid_bdev1", 00:18:27.739 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:27.739 "strip_size_kb": 64, 00:18:27.739 "state": "online", 00:18:27.739 "raid_level": "raid5f", 00:18:27.739 "superblock": true, 00:18:27.739 "num_base_bdevs": 4, 00:18:27.739 "num_base_bdevs_discovered": 3, 00:18:27.739 "num_base_bdevs_operational": 3, 00:18:27.739 "base_bdevs_list": [ 00:18:27.739 { 00:18:27.739 "name": null, 00:18:27.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.739 "is_configured": false, 00:18:27.739 "data_offset": 0, 00:18:27.739 "data_size": 63488 00:18:27.739 }, 00:18:27.739 { 00:18:27.739 "name": "BaseBdev2", 00:18:27.739 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:27.739 "is_configured": true, 00:18:27.739 "data_offset": 2048, 00:18:27.739 "data_size": 63488 00:18:27.739 }, 00:18:27.739 { 00:18:27.739 "name": "BaseBdev3", 00:18:27.739 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:27.739 "is_configured": true, 00:18:27.739 "data_offset": 2048, 00:18:27.739 "data_size": 63488 00:18:27.739 }, 00:18:27.739 { 00:18:27.739 "name": "BaseBdev4", 00:18:27.739 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:27.739 "is_configured": true, 00:18:27.739 "data_offset": 2048, 00:18:27.739 "data_size": 63488 00:18:27.739 } 00:18:27.739 ] 00:18:27.739 }' 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.739 
16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.739 [2024-12-06 16:34:09.537985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.739 [2024-12-06 16:34:09.538238] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:27.739 [2024-12-06 16:34:09.538259] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:27.739 request: 00:18:27.739 { 00:18:27.739 "base_bdev": "BaseBdev1", 00:18:27.739 "raid_bdev": "raid_bdev1", 00:18:27.739 "method": "bdev_raid_add_base_bdev", 00:18:27.739 "req_id": 1 00:18:27.739 } 00:18:27.739 Got JSON-RPC error response 00:18:27.739 response: 00:18:27.739 { 00:18:27.739 "code": -22, 00:18:27.739 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:27.739 } 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.739 16:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.113 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.113 "name": "raid_bdev1", 00:18:29.113 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:29.113 "strip_size_kb": 64, 00:18:29.113 "state": "online", 00:18:29.113 "raid_level": "raid5f", 00:18:29.113 "superblock": true, 00:18:29.113 "num_base_bdevs": 4, 00:18:29.113 "num_base_bdevs_discovered": 3, 00:18:29.113 "num_base_bdevs_operational": 3, 00:18:29.113 "base_bdevs_list": [ 00:18:29.113 { 00:18:29.113 "name": null, 00:18:29.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.113 "is_configured": false, 00:18:29.113 "data_offset": 0, 00:18:29.113 "data_size": 63488 00:18:29.113 }, 00:18:29.113 { 00:18:29.113 "name": "BaseBdev2", 00:18:29.113 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:29.113 "is_configured": true, 00:18:29.113 "data_offset": 2048, 00:18:29.113 "data_size": 63488 00:18:29.113 }, 00:18:29.113 { 00:18:29.113 "name": "BaseBdev3", 00:18:29.113 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:29.113 "is_configured": true, 00:18:29.113 "data_offset": 2048, 00:18:29.113 "data_size": 63488 00:18:29.113 }, 00:18:29.113 { 00:18:29.113 "name": "BaseBdev4", 00:18:29.114 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:29.114 "is_configured": true, 00:18:29.114 "data_offset": 2048, 00:18:29.114 "data_size": 63488 00:18:29.114 } 00:18:29.114 ] 00:18:29.114 }' 00:18:29.114 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.114 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:29.372 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.372 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.372 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:29.372 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:29.372 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.372 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.372 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.372 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.372 16:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.372 "name": "raid_bdev1", 00:18:29.372 "uuid": "4ab232dc-fe50-4721-8def-5182b3aa0691", 00:18:29.372 "strip_size_kb": 64, 00:18:29.372 "state": "online", 00:18:29.372 "raid_level": "raid5f", 00:18:29.372 "superblock": true, 00:18:29.372 "num_base_bdevs": 4, 00:18:29.372 "num_base_bdevs_discovered": 3, 00:18:29.372 "num_base_bdevs_operational": 3, 00:18:29.372 "base_bdevs_list": [ 00:18:29.372 { 00:18:29.372 "name": null, 00:18:29.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.372 "is_configured": false, 00:18:29.372 "data_offset": 0, 00:18:29.372 "data_size": 63488 00:18:29.372 }, 00:18:29.372 { 00:18:29.372 "name": "BaseBdev2", 00:18:29.372 "uuid": "28b3a80e-7999-516f-90ad-974a553369e7", 00:18:29.372 "is_configured": true, 
00:18:29.372 "data_offset": 2048, 00:18:29.372 "data_size": 63488 00:18:29.372 }, 00:18:29.372 { 00:18:29.372 "name": "BaseBdev3", 00:18:29.372 "uuid": "8d29573c-6ae1-5922-a49d-c50c09a1a045", 00:18:29.372 "is_configured": true, 00:18:29.372 "data_offset": 2048, 00:18:29.372 "data_size": 63488 00:18:29.372 }, 00:18:29.372 { 00:18:29.372 "name": "BaseBdev4", 00:18:29.372 "uuid": "1138a95d-82b9-5175-807f-dfae29a0f109", 00:18:29.372 "is_configured": true, 00:18:29.372 "data_offset": 2048, 00:18:29.372 "data_size": 63488 00:18:29.372 } 00:18:29.372 ] 00:18:29.372 }' 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 96013 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 96013 ']' 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 96013 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96013 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:29.372 killing process with pid 96013 00:18:29.372 Received shutdown signal, test 
time was about 60.000000 seconds 00:18:29.372 00:18:29.372 Latency(us) 00:18:29.372 [2024-12-06T16:34:11.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.372 [2024-12-06T16:34:11.211Z] =================================================================================================================== 00:18:29.372 [2024-12-06T16:34:11.211Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96013' 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 96013 00:18:29.372 [2024-12-06 16:34:11.170791] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:29.372 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 96013 00:18:29.372 [2024-12-06 16:34:11.170912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.372 [2024-12-06 16:34:11.170998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.372 [2024-12-06 16:34:11.171008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:18:29.631 [2024-12-06 16:34:11.221716] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.631 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:29.631 00:18:29.631 real 0m24.772s 00:18:29.631 user 0m31.362s 00:18:29.631 sys 0m2.816s 00:18:29.631 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:29.631 16:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.631 ************************************ 00:18:29.631 END TEST raid5f_rebuild_test_sb 00:18:29.631 ************************************ 00:18:29.631 16:34:11 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:29.631 16:34:11 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:29.631 16:34:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:29.631 16:34:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.631 16:34:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.890 ************************************ 00:18:29.890 START TEST raid_state_function_test_sb_4k 00:18:29.890 ************************************ 00:18:29.890 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:29.890 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:29.890 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:29.890 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:29.890 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:29.890 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:29.890 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:29.890 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:29.890 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:29.890 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:29.890 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:29.890 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:29.891 16:34:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96801 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96801' 00:18:29.891 Process raid pid: 96801 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96801 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 96801 ']' 00:18:29.891 16:34:11 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.891 16:34:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.891 [2024-12-06 16:34:11.552729] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:18:29.891 [2024-12-06 16:34:11.552918] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.891 [2024-12-06 16:34:11.726744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.150 [2024-12-06 16:34:11.752069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.150 [2024-12-06 16:34:11.794880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.150 [2024-12-06 16:34:11.794994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.718 [2024-12-06 16:34:12.421745] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:30.718 [2024-12-06 16:34:12.421864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:30.718 [2024-12-06 16:34:12.421881] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:30.718 [2024-12-06 16:34:12.421891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.718 
16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.718 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.978 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.978 "name": "Existed_Raid", 00:18:30.978 "uuid": "94545a2d-f09e-455d-a916-29569eb3b321", 00:18:30.978 "strip_size_kb": 0, 00:18:30.978 "state": "configuring", 00:18:30.978 "raid_level": "raid1", 00:18:30.978 "superblock": true, 00:18:30.978 "num_base_bdevs": 2, 00:18:30.978 "num_base_bdevs_discovered": 0, 00:18:30.978 "num_base_bdevs_operational": 2, 00:18:30.978 "base_bdevs_list": [ 00:18:30.978 { 00:18:30.978 "name": "BaseBdev1", 00:18:30.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.978 "is_configured": false, 00:18:30.978 "data_offset": 0, 00:18:30.978 "data_size": 0 00:18:30.978 }, 00:18:30.978 { 00:18:30.978 "name": "BaseBdev2", 00:18:30.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.978 "is_configured": false, 00:18:30.978 "data_offset": 0, 00:18:30.978 "data_size": 0 00:18:30.978 } 00:18:30.978 ] 00:18:30.978 }' 00:18:30.978 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.978 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.238 [2024-12-06 16:34:12.912789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:31.238 [2024-12-06 16:34:12.912842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.238 [2024-12-06 16:34:12.924768] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:31.238 [2024-12-06 16:34:12.924848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:31.238 [2024-12-06 16:34:12.924878] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.238 [2024-12-06 16:34:12.924901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.238 16:34:12 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.238 [2024-12-06 16:34:12.945736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.238 BaseBdev1 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.238 [ 00:18:31.238 { 00:18:31.238 "name": "BaseBdev1", 00:18:31.238 "aliases": [ 00:18:31.238 
"258dc9a3-8123-46fe-9eb0-4b54c8740032" 00:18:31.238 ], 00:18:31.238 "product_name": "Malloc disk", 00:18:31.238 "block_size": 4096, 00:18:31.238 "num_blocks": 8192, 00:18:31.238 "uuid": "258dc9a3-8123-46fe-9eb0-4b54c8740032", 00:18:31.238 "assigned_rate_limits": { 00:18:31.238 "rw_ios_per_sec": 0, 00:18:31.238 "rw_mbytes_per_sec": 0, 00:18:31.238 "r_mbytes_per_sec": 0, 00:18:31.238 "w_mbytes_per_sec": 0 00:18:31.238 }, 00:18:31.238 "claimed": true, 00:18:31.238 "claim_type": "exclusive_write", 00:18:31.238 "zoned": false, 00:18:31.238 "supported_io_types": { 00:18:31.238 "read": true, 00:18:31.238 "write": true, 00:18:31.238 "unmap": true, 00:18:31.238 "flush": true, 00:18:31.238 "reset": true, 00:18:31.238 "nvme_admin": false, 00:18:31.238 "nvme_io": false, 00:18:31.238 "nvme_io_md": false, 00:18:31.238 "write_zeroes": true, 00:18:31.238 "zcopy": true, 00:18:31.238 "get_zone_info": false, 00:18:31.238 "zone_management": false, 00:18:31.238 "zone_append": false, 00:18:31.238 "compare": false, 00:18:31.238 "compare_and_write": false, 00:18:31.238 "abort": true, 00:18:31.238 "seek_hole": false, 00:18:31.238 "seek_data": false, 00:18:31.238 "copy": true, 00:18:31.238 "nvme_iov_md": false 00:18:31.238 }, 00:18:31.238 "memory_domains": [ 00:18:31.238 { 00:18:31.238 "dma_device_id": "system", 00:18:31.238 "dma_device_type": 1 00:18:31.238 }, 00:18:31.238 { 00:18:31.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.238 "dma_device_type": 2 00:18:31.238 } 00:18:31.238 ], 00:18:31.238 "driver_specific": {} 00:18:31.238 } 00:18:31.238 ] 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.238 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.239 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.239 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.239 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.239 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.239 16:34:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.239 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.239 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.239 "name": "Existed_Raid", 00:18:31.239 "uuid": "0e1ec7f3-4f47-45c5-a351-f65820466b1e", 00:18:31.239 "strip_size_kb": 0, 00:18:31.239 "state": "configuring", 00:18:31.239 "raid_level": "raid1", 00:18:31.239 "superblock": true, 00:18:31.239 "num_base_bdevs": 2, 00:18:31.239 
"num_base_bdevs_discovered": 1, 00:18:31.239 "num_base_bdevs_operational": 2, 00:18:31.239 "base_bdevs_list": [ 00:18:31.239 { 00:18:31.239 "name": "BaseBdev1", 00:18:31.239 "uuid": "258dc9a3-8123-46fe-9eb0-4b54c8740032", 00:18:31.239 "is_configured": true, 00:18:31.239 "data_offset": 256, 00:18:31.239 "data_size": 7936 00:18:31.239 }, 00:18:31.239 { 00:18:31.239 "name": "BaseBdev2", 00:18:31.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.239 "is_configured": false, 00:18:31.239 "data_offset": 0, 00:18:31.239 "data_size": 0 00:18:31.239 } 00:18:31.239 ] 00:18:31.239 }' 00:18:31.239 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.239 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.813 [2024-12-06 16:34:13.369074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:31.813 [2024-12-06 16:34:13.369130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.813 [2024-12-06 16:34:13.381087] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.813 [2024-12-06 16:34:13.382973] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.813 [2024-12-06 16:34:13.383063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.813 "name": "Existed_Raid", 00:18:31.813 "uuid": "024efab8-9a2c-469e-b0c8-8755d6e08cc8", 00:18:31.813 "strip_size_kb": 0, 00:18:31.813 "state": "configuring", 00:18:31.813 "raid_level": "raid1", 00:18:31.813 "superblock": true, 00:18:31.813 "num_base_bdevs": 2, 00:18:31.813 "num_base_bdevs_discovered": 1, 00:18:31.813 "num_base_bdevs_operational": 2, 00:18:31.813 "base_bdevs_list": [ 00:18:31.813 { 00:18:31.813 "name": "BaseBdev1", 00:18:31.813 "uuid": "258dc9a3-8123-46fe-9eb0-4b54c8740032", 00:18:31.813 "is_configured": true, 00:18:31.813 "data_offset": 256, 00:18:31.813 "data_size": 7936 00:18:31.813 }, 00:18:31.813 { 00:18:31.813 "name": "BaseBdev2", 00:18:31.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.813 "is_configured": false, 00:18:31.813 "data_offset": 0, 00:18:31.813 "data_size": 0 00:18:31.813 } 00:18:31.813 ] 00:18:31.813 }' 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.813 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.079 16:34:13 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.079 [2024-12-06 16:34:13.784057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:32.079 [2024-12-06 16:34:13.784319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:18:32.079 [2024-12-06 16:34:13.784344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:32.079 BaseBdev2 00:18:32.079 [2024-12-06 16:34:13.784677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:32.079 [2024-12-06 16:34:13.784851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:18:32.079 [2024-12-06 16:34:13.784868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.079 [2024-12-06 16:34:13.785004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:32.079 16:34:13 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.079 [ 00:18:32.079 { 00:18:32.079 "name": "BaseBdev2", 00:18:32.079 "aliases": [ 00:18:32.079 "1f799c9a-8292-4e42-aa0d-7174bc0999a3" 00:18:32.079 ], 00:18:32.079 "product_name": "Malloc disk", 00:18:32.079 "block_size": 4096, 00:18:32.079 "num_blocks": 8192, 00:18:32.079 "uuid": "1f799c9a-8292-4e42-aa0d-7174bc0999a3", 00:18:32.079 "assigned_rate_limits": { 00:18:32.079 "rw_ios_per_sec": 0, 00:18:32.079 "rw_mbytes_per_sec": 0, 00:18:32.079 "r_mbytes_per_sec": 0, 00:18:32.079 "w_mbytes_per_sec": 0 00:18:32.079 }, 00:18:32.079 "claimed": true, 00:18:32.079 "claim_type": "exclusive_write", 00:18:32.079 "zoned": false, 00:18:32.079 "supported_io_types": { 00:18:32.079 "read": true, 00:18:32.079 "write": true, 00:18:32.079 "unmap": true, 00:18:32.079 "flush": true, 00:18:32.079 "reset": true, 00:18:32.079 "nvme_admin": false, 00:18:32.079 "nvme_io": false, 00:18:32.079 "nvme_io_md": false, 00:18:32.079 "write_zeroes": true, 00:18:32.079 "zcopy": true, 00:18:32.079 "get_zone_info": false, 00:18:32.079 "zone_management": false, 00:18:32.079 "zone_append": false, 00:18:32.079 "compare": false, 00:18:32.079 "compare_and_write": false, 00:18:32.079 "abort": true, 00:18:32.079 "seek_hole": false, 00:18:32.079 "seek_data": false, 00:18:32.079 "copy": true, 00:18:32.079 "nvme_iov_md": false 
00:18:32.079 }, 00:18:32.079 "memory_domains": [ 00:18:32.079 { 00:18:32.079 "dma_device_id": "system", 00:18:32.079 "dma_device_type": 1 00:18:32.079 }, 00:18:32.079 { 00:18:32.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.079 "dma_device_type": 2 00:18:32.079 } 00:18:32.079 ], 00:18:32.079 "driver_specific": {} 00:18:32.079 } 00:18:32.079 ] 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.079 "name": "Existed_Raid", 00:18:32.079 "uuid": "024efab8-9a2c-469e-b0c8-8755d6e08cc8", 00:18:32.079 "strip_size_kb": 0, 00:18:32.079 "state": "online", 00:18:32.079 "raid_level": "raid1", 00:18:32.079 "superblock": true, 00:18:32.079 "num_base_bdevs": 2, 00:18:32.079 "num_base_bdevs_discovered": 2, 00:18:32.079 "num_base_bdevs_operational": 2, 00:18:32.079 "base_bdevs_list": [ 00:18:32.079 { 00:18:32.079 "name": "BaseBdev1", 00:18:32.079 "uuid": "258dc9a3-8123-46fe-9eb0-4b54c8740032", 00:18:32.079 "is_configured": true, 00:18:32.079 "data_offset": 256, 00:18:32.079 "data_size": 7936 00:18:32.079 }, 00:18:32.079 { 00:18:32.079 "name": "BaseBdev2", 00:18:32.079 "uuid": "1f799c9a-8292-4e42-aa0d-7174bc0999a3", 00:18:32.079 "is_configured": true, 00:18:32.079 "data_offset": 256, 00:18:32.079 "data_size": 7936 00:18:32.079 } 00:18:32.079 ] 00:18:32.079 }' 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.079 16:34:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:32.646 16:34:14 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.646 [2024-12-06 16:34:14.231712] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:32.646 "name": "Existed_Raid", 00:18:32.646 "aliases": [ 00:18:32.646 "024efab8-9a2c-469e-b0c8-8755d6e08cc8" 00:18:32.646 ], 00:18:32.646 "product_name": "Raid Volume", 00:18:32.646 "block_size": 4096, 00:18:32.646 "num_blocks": 7936, 00:18:32.646 "uuid": "024efab8-9a2c-469e-b0c8-8755d6e08cc8", 00:18:32.646 "assigned_rate_limits": { 00:18:32.646 "rw_ios_per_sec": 0, 00:18:32.646 "rw_mbytes_per_sec": 0, 00:18:32.646 "r_mbytes_per_sec": 0, 00:18:32.646 "w_mbytes_per_sec": 0 00:18:32.646 }, 00:18:32.646 "claimed": false, 00:18:32.646 "zoned": false, 00:18:32.646 "supported_io_types": { 00:18:32.646 "read": true, 
00:18:32.646 "write": true, 00:18:32.646 "unmap": false, 00:18:32.646 "flush": false, 00:18:32.646 "reset": true, 00:18:32.646 "nvme_admin": false, 00:18:32.646 "nvme_io": false, 00:18:32.646 "nvme_io_md": false, 00:18:32.646 "write_zeroes": true, 00:18:32.646 "zcopy": false, 00:18:32.646 "get_zone_info": false, 00:18:32.646 "zone_management": false, 00:18:32.646 "zone_append": false, 00:18:32.646 "compare": false, 00:18:32.646 "compare_and_write": false, 00:18:32.646 "abort": false, 00:18:32.646 "seek_hole": false, 00:18:32.646 "seek_data": false, 00:18:32.646 "copy": false, 00:18:32.646 "nvme_iov_md": false 00:18:32.646 }, 00:18:32.646 "memory_domains": [ 00:18:32.646 { 00:18:32.646 "dma_device_id": "system", 00:18:32.646 "dma_device_type": 1 00:18:32.646 }, 00:18:32.646 { 00:18:32.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.646 "dma_device_type": 2 00:18:32.646 }, 00:18:32.646 { 00:18:32.646 "dma_device_id": "system", 00:18:32.646 "dma_device_type": 1 00:18:32.646 }, 00:18:32.646 { 00:18:32.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.646 "dma_device_type": 2 00:18:32.646 } 00:18:32.646 ], 00:18:32.646 "driver_specific": { 00:18:32.646 "raid": { 00:18:32.646 "uuid": "024efab8-9a2c-469e-b0c8-8755d6e08cc8", 00:18:32.646 "strip_size_kb": 0, 00:18:32.646 "state": "online", 00:18:32.646 "raid_level": "raid1", 00:18:32.646 "superblock": true, 00:18:32.646 "num_base_bdevs": 2, 00:18:32.646 "num_base_bdevs_discovered": 2, 00:18:32.646 "num_base_bdevs_operational": 2, 00:18:32.646 "base_bdevs_list": [ 00:18:32.646 { 00:18:32.646 "name": "BaseBdev1", 00:18:32.646 "uuid": "258dc9a3-8123-46fe-9eb0-4b54c8740032", 00:18:32.646 "is_configured": true, 00:18:32.646 "data_offset": 256, 00:18:32.646 "data_size": 7936 00:18:32.646 }, 00:18:32.646 { 00:18:32.646 "name": "BaseBdev2", 00:18:32.646 "uuid": "1f799c9a-8292-4e42-aa0d-7174bc0999a3", 00:18:32.646 "is_configured": true, 00:18:32.646 "data_offset": 256, 00:18:32.646 "data_size": 7936 00:18:32.646 } 
00:18:32.646 ] 00:18:32.646 } 00:18:32.646 } 00:18:32.646 }' 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:32.646 BaseBdev2' 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.646 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.647 16:34:14 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.647 [2024-12-06 16:34:14.431100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:32.647 16:34:14 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.647 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.905 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.905 "name": "Existed_Raid", 00:18:32.905 "uuid": "024efab8-9a2c-469e-b0c8-8755d6e08cc8", 00:18:32.905 "strip_size_kb": 0, 00:18:32.905 "state": "online", 00:18:32.905 "raid_level": "raid1", 00:18:32.905 "superblock": true, 00:18:32.905 
"num_base_bdevs": 2, 00:18:32.905 "num_base_bdevs_discovered": 1, 00:18:32.905 "num_base_bdevs_operational": 1, 00:18:32.905 "base_bdevs_list": [ 00:18:32.905 { 00:18:32.905 "name": null, 00:18:32.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.905 "is_configured": false, 00:18:32.905 "data_offset": 0, 00:18:32.905 "data_size": 7936 00:18:32.905 }, 00:18:32.905 { 00:18:32.905 "name": "BaseBdev2", 00:18:32.905 "uuid": "1f799c9a-8292-4e42-aa0d-7174bc0999a3", 00:18:32.905 "is_configured": true, 00:18:32.905 "data_offset": 256, 00:18:32.905 "data_size": 7936 00:18:32.905 } 00:18:32.905 ] 00:18:32.905 }' 00:18:32.905 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.905 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.164 [2024-12-06 16:34:14.962171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:33.164 [2024-12-06 16:34:14.962286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.164 [2024-12-06 16:34:14.974421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.164 [2024-12-06 16:34:14.974557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.164 [2024-12-06 16:34:14.974604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.164 16:34:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:33.437 16:34:15 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96801 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 96801 ']' 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 96801 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96801 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96801' 00:18:33.437 killing process with pid 96801 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 96801 00:18:33.437 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 96801 00:18:33.437 [2024-12-06 16:34:15.061216] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:33.437 [2024-12-06 16:34:15.062237] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.697 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:33.697 00:18:33.697 real 0m3.799s 00:18:33.697 user 0m5.987s 00:18:33.697 sys 0m0.777s 00:18:33.697 
************************************ 00:18:33.697 END TEST raid_state_function_test_sb_4k 00:18:33.697 ************************************ 00:18:33.697 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.697 16:34:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.697 16:34:15 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:33.697 16:34:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:33.697 16:34:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.697 16:34:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.697 ************************************ 00:18:33.697 START TEST raid_superblock_test_4k 00:18:33.697 ************************************ 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=97037 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 97037 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 97037 ']' 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.697 16:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.697 [2024-12-06 16:34:15.436419] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:18:33.697 [2024-12-06 16:34:15.436687] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97037 ] 00:18:33.955 [2024-12-06 16:34:15.616058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.955 [2024-12-06 16:34:15.645851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.955 [2024-12-06 16:34:15.692420] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.955 [2024-12-06 16:34:15.692530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.521 malloc1 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.521 [2024-12-06 16:34:16.334787] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:34.521 [2024-12-06 16:34:16.334930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.521 [2024-12-06 16:34:16.334972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:34.521 [2024-12-06 16:34:16.335013] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.521 [2024-12-06 16:34:16.337531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.521 [2024-12-06 16:34:16.337615] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:34.521 pt1 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.521 malloc2 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.521 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.779 [2024-12-06 16:34:16.360239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:34.779 [2024-12-06 16:34:16.360362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.779 [2024-12-06 16:34:16.360416] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:34.779 [2024-12-06 16:34:16.360461] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.779 [2024-12-06 16:34:16.362895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.779 [2024-12-06 
16:34:16.362970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:34.779 pt2 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.779 [2024-12-06 16:34:16.372265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:34.779 [2024-12-06 16:34:16.374372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:34.779 [2024-12-06 16:34:16.374574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:18:34.779 [2024-12-06 16:34:16.374629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:34.779 [2024-12-06 16:34:16.374962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:34.779 [2024-12-06 16:34:16.375163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:18:34.779 [2024-12-06 16:34:16.375225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:18:34.779 [2024-12-06 16:34:16.375417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.779 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.779 "name": "raid_bdev1", 00:18:34.779 "uuid": "f9cad38b-1499-42f1-a5d9-1dddfc160d01", 00:18:34.779 "strip_size_kb": 0, 00:18:34.779 "state": "online", 00:18:34.780 "raid_level": "raid1", 00:18:34.780 "superblock": true, 00:18:34.780 "num_base_bdevs": 2, 00:18:34.780 
"num_base_bdevs_discovered": 2, 00:18:34.780 "num_base_bdevs_operational": 2, 00:18:34.780 "base_bdevs_list": [ 00:18:34.780 { 00:18:34.780 "name": "pt1", 00:18:34.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.780 "is_configured": true, 00:18:34.780 "data_offset": 256, 00:18:34.780 "data_size": 7936 00:18:34.780 }, 00:18:34.780 { 00:18:34.780 "name": "pt2", 00:18:34.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.780 "is_configured": true, 00:18:34.780 "data_offset": 256, 00:18:34.780 "data_size": 7936 00:18:34.780 } 00:18:34.780 ] 00:18:34.780 }' 00:18:34.780 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.780 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:35.039 [2024-12-06 16:34:16.783934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:35.039 "name": "raid_bdev1", 00:18:35.039 "aliases": [ 00:18:35.039 "f9cad38b-1499-42f1-a5d9-1dddfc160d01" 00:18:35.039 ], 00:18:35.039 "product_name": "Raid Volume", 00:18:35.039 "block_size": 4096, 00:18:35.039 "num_blocks": 7936, 00:18:35.039 "uuid": "f9cad38b-1499-42f1-a5d9-1dddfc160d01", 00:18:35.039 "assigned_rate_limits": { 00:18:35.039 "rw_ios_per_sec": 0, 00:18:35.039 "rw_mbytes_per_sec": 0, 00:18:35.039 "r_mbytes_per_sec": 0, 00:18:35.039 "w_mbytes_per_sec": 0 00:18:35.039 }, 00:18:35.039 "claimed": false, 00:18:35.039 "zoned": false, 00:18:35.039 "supported_io_types": { 00:18:35.039 "read": true, 00:18:35.039 "write": true, 00:18:35.039 "unmap": false, 00:18:35.039 "flush": false, 00:18:35.039 "reset": true, 00:18:35.039 "nvme_admin": false, 00:18:35.039 "nvme_io": false, 00:18:35.039 "nvme_io_md": false, 00:18:35.039 "write_zeroes": true, 00:18:35.039 "zcopy": false, 00:18:35.039 "get_zone_info": false, 00:18:35.039 "zone_management": false, 00:18:35.039 "zone_append": false, 00:18:35.039 "compare": false, 00:18:35.039 "compare_and_write": false, 00:18:35.039 "abort": false, 00:18:35.039 "seek_hole": false, 00:18:35.039 "seek_data": false, 00:18:35.039 "copy": false, 00:18:35.039 "nvme_iov_md": false 00:18:35.039 }, 00:18:35.039 "memory_domains": [ 00:18:35.039 { 00:18:35.039 "dma_device_id": "system", 00:18:35.039 "dma_device_type": 1 00:18:35.039 }, 00:18:35.039 { 00:18:35.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.039 "dma_device_type": 2 00:18:35.039 }, 00:18:35.039 { 00:18:35.039 "dma_device_id": "system", 00:18:35.039 "dma_device_type": 1 00:18:35.039 }, 00:18:35.039 { 00:18:35.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.039 "dma_device_type": 2 00:18:35.039 } 00:18:35.039 ], 
00:18:35.039 "driver_specific": { 00:18:35.039 "raid": { 00:18:35.039 "uuid": "f9cad38b-1499-42f1-a5d9-1dddfc160d01", 00:18:35.039 "strip_size_kb": 0, 00:18:35.039 "state": "online", 00:18:35.039 "raid_level": "raid1", 00:18:35.039 "superblock": true, 00:18:35.039 "num_base_bdevs": 2, 00:18:35.039 "num_base_bdevs_discovered": 2, 00:18:35.039 "num_base_bdevs_operational": 2, 00:18:35.039 "base_bdevs_list": [ 00:18:35.039 { 00:18:35.039 "name": "pt1", 00:18:35.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.039 "is_configured": true, 00:18:35.039 "data_offset": 256, 00:18:35.039 "data_size": 7936 00:18:35.039 }, 00:18:35.039 { 00:18:35.039 "name": "pt2", 00:18:35.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.039 "is_configured": true, 00:18:35.039 "data_offset": 256, 00:18:35.039 "data_size": 7936 00:18:35.039 } 00:18:35.039 ] 00:18:35.039 } 00:18:35.039 } 00:18:35.039 }' 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:35.039 pt2' 00:18:35.039 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.298 16:34:16 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.298 16:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.298 [2024-12-06 16:34:16.987502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.298 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:35.298 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f9cad38b-1499-42f1-a5d9-1dddfc160d01 00:18:35.298 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z f9cad38b-1499-42f1-a5d9-1dddfc160d01 ']' 00:18:35.298 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:35.298 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.298 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.299 [2024-12-06 16:34:17.031158] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:35.299 [2024-12-06 16:34:17.031273] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:35.299 [2024-12-06 16:34:17.031403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.299 [2024-12-06 16:34:17.031502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:35.299 [2024-12-06 16:34:17.031552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.299 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:35.557 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.558 [2024-12-06 16:34:17.146960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:35.558 [2024-12-06 16:34:17.149154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:35.558 [2024-12-06 16:34:17.149301] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:35.558 [2024-12-06 16:34:17.149373] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:35.558 [2024-12-06 16:34:17.149393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:35.558 [2024-12-06 16:34:17.149404] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:18:35.558 request: 00:18:35.558 { 00:18:35.558 "name": "raid_bdev1", 00:18:35.558 "raid_level": "raid1", 00:18:35.558 "base_bdevs": [ 00:18:35.558 "malloc1", 00:18:35.558 "malloc2" 00:18:35.558 ], 00:18:35.558 "superblock": false, 00:18:35.558 "method": "bdev_raid_create", 00:18:35.558 "req_id": 1 00:18:35.558 } 00:18:35.558 Got JSON-RPC error response 00:18:35.558 response: 00:18:35.558 { 00:18:35.558 "code": -17, 00:18:35.558 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:35.558 } 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.558 [2024-12-06 16:34:17.190817] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:35.558 [2024-12-06 16:34:17.190923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.558 [2024-12-06 16:34:17.190970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:35.558 [2024-12-06 16:34:17.191001] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.558 [2024-12-06 16:34:17.193457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.558 [2024-12-06 16:34:17.193533] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:35.558 [2024-12-06 16:34:17.193644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:35.558 [2024-12-06 16:34:17.193719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:35.558 pt1 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.558 "name": "raid_bdev1", 00:18:35.558 "uuid": "f9cad38b-1499-42f1-a5d9-1dddfc160d01", 00:18:35.558 "strip_size_kb": 0, 00:18:35.558 "state": "configuring", 00:18:35.558 "raid_level": "raid1", 00:18:35.558 "superblock": true, 00:18:35.558 "num_base_bdevs": 2, 00:18:35.558 "num_base_bdevs_discovered": 1, 00:18:35.558 "num_base_bdevs_operational": 2, 00:18:35.558 "base_bdevs_list": [ 00:18:35.558 { 00:18:35.558 "name": "pt1", 00:18:35.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.558 "is_configured": true, 00:18:35.558 "data_offset": 256, 00:18:35.558 "data_size": 7936 00:18:35.558 }, 00:18:35.558 { 00:18:35.558 "name": null, 00:18:35.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.558 "is_configured": false, 00:18:35.558 "data_offset": 256, 00:18:35.558 "data_size": 7936 00:18:35.558 } 
00:18:35.558 ] 00:18:35.558 }' 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.558 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.816 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:35.816 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:35.816 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:35.816 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:35.816 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.816 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.817 [2024-12-06 16:34:17.558309] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:35.817 [2024-12-06 16:34:17.558449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.817 [2024-12-06 16:34:17.558515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:35.817 [2024-12-06 16:34:17.558555] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.817 [2024-12-06 16:34:17.559142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.817 [2024-12-06 16:34:17.559225] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:35.817 [2024-12-06 16:34:17.559359] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:35.817 [2024-12-06 16:34:17.559414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:35.817 [2024-12-06 16:34:17.559567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006980 00:18:35.817 [2024-12-06 16:34:17.559607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:35.817 [2024-12-06 16:34:17.559893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:35.817 [2024-12-06 16:34:17.560034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:18:35.817 [2024-12-06 16:34:17.560058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:18:35.817 [2024-12-06 16:34:17.560175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.817 pt2 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.817 "name": "raid_bdev1", 00:18:35.817 "uuid": "f9cad38b-1499-42f1-a5d9-1dddfc160d01", 00:18:35.817 "strip_size_kb": 0, 00:18:35.817 "state": "online", 00:18:35.817 "raid_level": "raid1", 00:18:35.817 "superblock": true, 00:18:35.817 "num_base_bdevs": 2, 00:18:35.817 "num_base_bdevs_discovered": 2, 00:18:35.817 "num_base_bdevs_operational": 2, 00:18:35.817 "base_bdevs_list": [ 00:18:35.817 { 00:18:35.817 "name": "pt1", 00:18:35.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.817 "is_configured": true, 00:18:35.817 "data_offset": 256, 00:18:35.817 "data_size": 7936 00:18:35.817 }, 00:18:35.817 { 00:18:35.817 "name": "pt2", 00:18:35.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.817 "is_configured": true, 00:18:35.817 "data_offset": 256, 00:18:35.817 "data_size": 7936 00:18:35.817 } 00:18:35.817 ] 00:18:35.817 }' 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.817 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.382 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:18:36.382 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:36.382 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:36.382 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:36.382 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:36.382 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:36.382 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:36.382 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:36.382 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.382 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.382 [2024-12-06 16:34:17.922022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.382 16:34:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.382 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:36.382 "name": "raid_bdev1", 00:18:36.382 "aliases": [ 00:18:36.382 "f9cad38b-1499-42f1-a5d9-1dddfc160d01" 00:18:36.382 ], 00:18:36.382 "product_name": "Raid Volume", 00:18:36.382 "block_size": 4096, 00:18:36.382 "num_blocks": 7936, 00:18:36.382 "uuid": "f9cad38b-1499-42f1-a5d9-1dddfc160d01", 00:18:36.382 "assigned_rate_limits": { 00:18:36.382 "rw_ios_per_sec": 0, 00:18:36.382 "rw_mbytes_per_sec": 0, 00:18:36.382 "r_mbytes_per_sec": 0, 00:18:36.382 "w_mbytes_per_sec": 0 00:18:36.382 }, 00:18:36.382 "claimed": false, 00:18:36.382 "zoned": false, 00:18:36.382 "supported_io_types": { 00:18:36.382 "read": true, 00:18:36.382 "write": true, 00:18:36.382 "unmap": false, 
00:18:36.382 "flush": false, 00:18:36.382 "reset": true, 00:18:36.382 "nvme_admin": false, 00:18:36.382 "nvme_io": false, 00:18:36.382 "nvme_io_md": false, 00:18:36.382 "write_zeroes": true, 00:18:36.382 "zcopy": false, 00:18:36.382 "get_zone_info": false, 00:18:36.382 "zone_management": false, 00:18:36.382 "zone_append": false, 00:18:36.382 "compare": false, 00:18:36.382 "compare_and_write": false, 00:18:36.382 "abort": false, 00:18:36.382 "seek_hole": false, 00:18:36.382 "seek_data": false, 00:18:36.382 "copy": false, 00:18:36.382 "nvme_iov_md": false 00:18:36.382 }, 00:18:36.382 "memory_domains": [ 00:18:36.382 { 00:18:36.382 "dma_device_id": "system", 00:18:36.382 "dma_device_type": 1 00:18:36.382 }, 00:18:36.382 { 00:18:36.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.382 "dma_device_type": 2 00:18:36.382 }, 00:18:36.382 { 00:18:36.382 "dma_device_id": "system", 00:18:36.382 "dma_device_type": 1 00:18:36.382 }, 00:18:36.382 { 00:18:36.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.382 "dma_device_type": 2 00:18:36.382 } 00:18:36.382 ], 00:18:36.382 "driver_specific": { 00:18:36.382 "raid": { 00:18:36.382 "uuid": "f9cad38b-1499-42f1-a5d9-1dddfc160d01", 00:18:36.382 "strip_size_kb": 0, 00:18:36.382 "state": "online", 00:18:36.382 "raid_level": "raid1", 00:18:36.382 "superblock": true, 00:18:36.382 "num_base_bdevs": 2, 00:18:36.382 "num_base_bdevs_discovered": 2, 00:18:36.382 "num_base_bdevs_operational": 2, 00:18:36.382 "base_bdevs_list": [ 00:18:36.382 { 00:18:36.382 "name": "pt1", 00:18:36.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:36.382 "is_configured": true, 00:18:36.382 "data_offset": 256, 00:18:36.382 "data_size": 7936 00:18:36.382 }, 00:18:36.382 { 00:18:36.382 "name": "pt2", 00:18:36.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.382 "is_configured": true, 00:18:36.382 "data_offset": 256, 00:18:36.382 "data_size": 7936 00:18:36.382 } 00:18:36.382 ] 00:18:36.382 } 00:18:36.382 } 00:18:36.382 }' 00:18:36.382 
16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:36.383 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:36.383 pt2' 00:18:36.383 16:34:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.383 
16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.383 [2024-12-06 16:34:18.125633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' f9cad38b-1499-42f1-a5d9-1dddfc160d01 '!=' f9cad38b-1499-42f1-a5d9-1dddfc160d01 ']' 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.383 [2024-12-06 16:34:18.169312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:36.383 
16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.383 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.640 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.640 "name": "raid_bdev1", 00:18:36.640 "uuid": "f9cad38b-1499-42f1-a5d9-1dddfc160d01", 
00:18:36.640 "strip_size_kb": 0, 00:18:36.640 "state": "online", 00:18:36.640 "raid_level": "raid1", 00:18:36.640 "superblock": true, 00:18:36.640 "num_base_bdevs": 2, 00:18:36.640 "num_base_bdevs_discovered": 1, 00:18:36.640 "num_base_bdevs_operational": 1, 00:18:36.640 "base_bdevs_list": [ 00:18:36.640 { 00:18:36.640 "name": null, 00:18:36.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.640 "is_configured": false, 00:18:36.640 "data_offset": 0, 00:18:36.640 "data_size": 7936 00:18:36.640 }, 00:18:36.640 { 00:18:36.640 "name": "pt2", 00:18:36.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.640 "is_configured": true, 00:18:36.640 "data_offset": 256, 00:18:36.640 "data_size": 7936 00:18:36.640 } 00:18:36.640 ] 00:18:36.640 }' 00:18:36.640 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.640 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.898 [2024-12-06 16:34:18.580555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.898 [2024-12-06 16:34:18.580643] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.898 [2024-12-06 16:34:18.580752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.898 [2024-12-06 16:34:18.580837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.898 [2024-12-06 16:34:18.580902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:18:36.898 16:34:18 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:36.898 16:34:18 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.898 [2024-12-06 16:34:18.660404] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:36.898 [2024-12-06 16:34:18.660524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.898 [2024-12-06 16:34:18.660565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:36.898 [2024-12-06 16:34:18.660599] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.898 [2024-12-06 16:34:18.663070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.898 [2024-12-06 16:34:18.663151] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:36.898 [2024-12-06 16:34:18.663273] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:36.898 [2024-12-06 16:34:18.663341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:36.898 [2024-12-06 16:34:18.663474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:18:36.898 [2024-12-06 16:34:18.663519] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:36.898 [2024-12-06 16:34:18.663793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:36.898 [2024-12-06 16:34:18.663972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:18:36.898 [2024-12-06 16:34:18.664024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 
00:18:36.898 [2024-12-06 16:34:18.664187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.898 pt2 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.898 16:34:18 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.898 "name": "raid_bdev1", 00:18:36.898 "uuid": "f9cad38b-1499-42f1-a5d9-1dddfc160d01", 00:18:36.898 "strip_size_kb": 0, 00:18:36.898 "state": "online", 00:18:36.898 "raid_level": "raid1", 00:18:36.898 "superblock": true, 00:18:36.898 "num_base_bdevs": 2, 00:18:36.898 "num_base_bdevs_discovered": 1, 00:18:36.898 "num_base_bdevs_operational": 1, 00:18:36.898 "base_bdevs_list": [ 00:18:36.898 { 00:18:36.898 "name": null, 00:18:36.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.898 "is_configured": false, 00:18:36.898 "data_offset": 256, 00:18:36.898 "data_size": 7936 00:18:36.898 }, 00:18:36.898 { 00:18:36.898 "name": "pt2", 00:18:36.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.898 "is_configured": true, 00:18:36.898 "data_offset": 256, 00:18:36.898 "data_size": 7936 00:18:36.898 } 00:18:36.898 ] 00:18:36.898 }' 00:18:36.899 16:34:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.899 16:34:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.463 [2024-12-06 16:34:19.047806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.463 [2024-12-06 16:34:19.047916] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.463 [2024-12-06 16:34:19.048018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.463 [2024-12-06 16:34:19.048094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.463 [2024-12-06 16:34:19.048151] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.463 [2024-12-06 16:34:19.111665] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:37.463 [2024-12-06 16:34:19.111784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.463 [2024-12-06 16:34:19.111819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:37.463 [2024-12-06 16:34:19.111875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.463 [2024-12-06 16:34:19.114254] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.463 [2024-12-06 16:34:19.114335] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:37.463 [2024-12-06 16:34:19.114457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:37.463 [2024-12-06 16:34:19.114545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:37.463 [2024-12-06 16:34:19.114707] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:37.463 [2024-12-06 16:34:19.114773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.463 [2024-12-06 16:34:19.114823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:18:37.463 [2024-12-06 16:34:19.114902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.463 [2024-12-06 16:34:19.115011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:18:37.463 [2024-12-06 16:34:19.115055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:37.463 [2024-12-06 16:34:19.115334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:37.463 [2024-12-06 16:34:19.115509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:18:37.463 [2024-12-06 16:34:19.115557] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:18:37.463 [2024-12-06 16:34:19.115766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.463 pt1 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.463 "name": "raid_bdev1", 00:18:37.463 "uuid": "f9cad38b-1499-42f1-a5d9-1dddfc160d01", 00:18:37.463 "strip_size_kb": 0, 00:18:37.463 "state": "online", 00:18:37.463 "raid_level": "raid1", 
00:18:37.463 "superblock": true, 00:18:37.463 "num_base_bdevs": 2, 00:18:37.463 "num_base_bdevs_discovered": 1, 00:18:37.463 "num_base_bdevs_operational": 1, 00:18:37.463 "base_bdevs_list": [ 00:18:37.463 { 00:18:37.463 "name": null, 00:18:37.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.463 "is_configured": false, 00:18:37.463 "data_offset": 256, 00:18:37.463 "data_size": 7936 00:18:37.463 }, 00:18:37.463 { 00:18:37.463 "name": "pt2", 00:18:37.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.463 "is_configured": true, 00:18:37.463 "data_offset": 256, 00:18:37.463 "data_size": 7936 00:18:37.463 } 00:18:37.463 ] 00:18:37.463 }' 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.463 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.030 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:38.030 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.030 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.030 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:38.030 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.030 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:38.030 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:38.030 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.031 
[2024-12-06 16:34:19.627087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' f9cad38b-1499-42f1-a5d9-1dddfc160d01 '!=' f9cad38b-1499-42f1-a5d9-1dddfc160d01 ']' 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 97037 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 97037 ']' 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 97037 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97037 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97037' 00:18:38.031 killing process with pid 97037 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 97037 00:18:38.031 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 97037 00:18:38.031 [2024-12-06 16:34:19.699954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.031 [2024-12-06 16:34:19.700056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.031 [2024-12-06 16:34:19.700180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:18:38.031 [2024-12-06 16:34:19.700193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:18:38.031 [2024-12-06 16:34:19.722900] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.288 16:34:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:38.288 00:18:38.288 real 0m4.586s 00:18:38.288 user 0m7.489s 00:18:38.288 sys 0m0.950s 00:18:38.288 ************************************ 00:18:38.288 END TEST raid_superblock_test_4k 00:18:38.288 ************************************ 00:18:38.288 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.288 16:34:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.288 16:34:19 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:18:38.288 16:34:19 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:38.289 16:34:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:38.289 16:34:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.289 16:34:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.289 ************************************ 00:18:38.289 START TEST raid_rebuild_test_sb_4k 00:18:38.289 ************************************ 00:18:38.289 16:34:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:38.289 16:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:38.289 16:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:38.289 16:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:38.289 16:34:19 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:38.289 16:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:38.289 16:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:38.289 16:34:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=97349 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 97349 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 97349 ']' 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.289 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.289 [2024-12-06 16:34:20.089547] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:18:38.289 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:38.289 Zero copy mechanism will not be used. 
00:18:38.289 [2024-12-06 16:34:20.089767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97349 ] 00:18:38.547 [2024-12-06 16:34:20.243588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.547 [2024-12-06 16:34:20.271740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.547 [2024-12-06 16:34:20.316294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.547 [2024-12-06 16:34:20.316408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.482 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.482 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:39.482 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.482 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:39.482 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.482 16:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.482 BaseBdev1_malloc 00:18:39.482 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.482 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:39.482 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.482 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.482 [2024-12-06 16:34:21.009126] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:39.482 [2024-12-06 16:34:21.009298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.482 [2024-12-06 16:34:21.009345] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:39.482 [2024-12-06 16:34:21.009425] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.483 [2024-12-06 16:34:21.011697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.483 [2024-12-06 16:34:21.011773] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:39.483 BaseBdev1 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.483 BaseBdev2_malloc 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.483 [2024-12-06 16:34:21.038492] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:39.483 [2024-12-06 16:34:21.038685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:39.483 [2024-12-06 16:34:21.038727] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:39.483 [2024-12-06 16:34:21.038759] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.483 [2024-12-06 16:34:21.041112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.483 [2024-12-06 16:34:21.041187] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:39.483 BaseBdev2 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.483 spare_malloc 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.483 spare_delay 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.483 
[2024-12-06 16:34:21.079633] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:39.483 [2024-12-06 16:34:21.079789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.483 [2024-12-06 16:34:21.079836] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:39.483 [2024-12-06 16:34:21.079871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.483 [2024-12-06 16:34:21.082192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.483 [2024-12-06 16:34:21.082297] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:39.483 spare 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.483 [2024-12-06 16:34:21.091659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.483 [2024-12-06 16:34:21.093712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.483 [2024-12-06 16:34:21.093935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:18:39.483 [2024-12-06 16:34:21.093989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:39.483 [2024-12-06 16:34:21.094293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:39.483 [2024-12-06 16:34:21.094522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:18:39.483 [2024-12-06 
16:34:21.094569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:18:39.483 [2024-12-06 16:34:21.094737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.483 "name": "raid_bdev1", 00:18:39.483 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:39.483 "strip_size_kb": 0, 00:18:39.483 "state": "online", 00:18:39.483 "raid_level": "raid1", 00:18:39.483 "superblock": true, 00:18:39.483 "num_base_bdevs": 2, 00:18:39.483 "num_base_bdevs_discovered": 2, 00:18:39.483 "num_base_bdevs_operational": 2, 00:18:39.483 "base_bdevs_list": [ 00:18:39.483 { 00:18:39.483 "name": "BaseBdev1", 00:18:39.483 "uuid": "69650837-c060-563e-ba87-939e5ec1b5b6", 00:18:39.483 "is_configured": true, 00:18:39.483 "data_offset": 256, 00:18:39.483 "data_size": 7936 00:18:39.483 }, 00:18:39.483 { 00:18:39.483 "name": "BaseBdev2", 00:18:39.483 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:39.483 "is_configured": true, 00:18:39.483 "data_offset": 256, 00:18:39.483 "data_size": 7936 00:18:39.483 } 00:18:39.483 ] 00:18:39.483 }' 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.483 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.742 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:39.742 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.742 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.742 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.742 [2024-12-06 16:34:21.531236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.742 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.742 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:18:39.742 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.742 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:39.742 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.742 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:40.007 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.007 
16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:40.007 [2024-12-06 16:34:21.806549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:40.007 /dev/nbd0 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:40.309 1+0 records in 00:18:40.309 1+0 records out 00:18:40.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306309 s, 13.4 MB/s 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:40.309 16:34:21 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:40.309 16:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:40.878 7936+0 records in 00:18:40.878 7936+0 records out 00:18:40.878 32505856 bytes (33 MB, 31 MiB) copied, 0.69247 s, 46.9 MB/s 00:18:40.878 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:40.878 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.878 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:40.878 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:40.878 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:40.878 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.878 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:41.136 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:41.136 
[2024-12-06 16:34:22.794568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.136 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:41.136 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:41.136 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:41.136 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:41.136 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:41.136 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.137 [2024-12-06 16:34:22.815591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.137 16:34:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.137 "name": "raid_bdev1", 00:18:41.137 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:41.137 "strip_size_kb": 0, 00:18:41.137 "state": "online", 00:18:41.137 "raid_level": "raid1", 00:18:41.137 "superblock": true, 00:18:41.137 "num_base_bdevs": 2, 00:18:41.137 "num_base_bdevs_discovered": 1, 00:18:41.137 "num_base_bdevs_operational": 1, 00:18:41.137 "base_bdevs_list": [ 00:18:41.137 { 00:18:41.137 "name": null, 00:18:41.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.137 "is_configured": false, 00:18:41.137 "data_offset": 0, 00:18:41.137 "data_size": 7936 00:18:41.137 }, 00:18:41.137 { 00:18:41.137 "name": "BaseBdev2", 00:18:41.137 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:41.137 "is_configured": true, 00:18:41.137 "data_offset": 256, 00:18:41.137 
"data_size": 7936 00:18:41.137 } 00:18:41.137 ] 00:18:41.137 }' 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.137 16:34:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.395 16:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:41.395 16:34:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.395 16:34:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.395 [2024-12-06 16:34:23.218984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.395 [2024-12-06 16:34:23.224358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:18:41.395 16:34:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.395 16:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:41.395 [2024-12-06 16:34:23.226646] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.773 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.773 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.773 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.773 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.773 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.773 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.773 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:42.773 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.773 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.773 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.773 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.773 "name": "raid_bdev1", 00:18:42.773 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:42.773 "strip_size_kb": 0, 00:18:42.774 "state": "online", 00:18:42.774 "raid_level": "raid1", 00:18:42.774 "superblock": true, 00:18:42.774 "num_base_bdevs": 2, 00:18:42.774 "num_base_bdevs_discovered": 2, 00:18:42.774 "num_base_bdevs_operational": 2, 00:18:42.774 "process": { 00:18:42.774 "type": "rebuild", 00:18:42.774 "target": "spare", 00:18:42.774 "progress": { 00:18:42.774 "blocks": 2560, 00:18:42.774 "percent": 32 00:18:42.774 } 00:18:42.774 }, 00:18:42.774 "base_bdevs_list": [ 00:18:42.774 { 00:18:42.774 "name": "spare", 00:18:42.774 "uuid": "c5f1c138-c459-54d4-8889-7e2ed5797588", 00:18:42.774 "is_configured": true, 00:18:42.774 "data_offset": 256, 00:18:42.774 "data_size": 7936 00:18:42.774 }, 00:18:42.774 { 00:18:42.774 "name": "BaseBdev2", 00:18:42.774 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:42.774 "is_configured": true, 00:18:42.774 "data_offset": 256, 00:18:42.774 "data_size": 7936 00:18:42.774 } 00:18:42.774 ] 00:18:42.774 }' 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.774 
16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.774 [2024-12-06 16:34:24.355016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.774 [2024-12-06 16:34:24.432585] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:42.774 [2024-12-06 16:34:24.432710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.774 [2024-12-06 16:34:24.432757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.774 [2024-12-06 16:34:24.432806] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.774 16:34:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.774 "name": "raid_bdev1", 00:18:42.774 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:42.774 "strip_size_kb": 0, 00:18:42.774 "state": "online", 00:18:42.774 "raid_level": "raid1", 00:18:42.774 "superblock": true, 00:18:42.774 "num_base_bdevs": 2, 00:18:42.774 "num_base_bdevs_discovered": 1, 00:18:42.774 "num_base_bdevs_operational": 1, 00:18:42.774 "base_bdevs_list": [ 00:18:42.774 { 00:18:42.774 "name": null, 00:18:42.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.774 "is_configured": false, 00:18:42.774 "data_offset": 0, 00:18:42.774 "data_size": 7936 00:18:42.774 }, 00:18:42.774 { 00:18:42.774 "name": "BaseBdev2", 00:18:42.774 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:42.774 "is_configured": true, 00:18:42.774 "data_offset": 256, 00:18:42.774 "data_size": 7936 00:18:42.774 } 00:18:42.774 ] 00:18:42.774 }' 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.774 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.341 "name": "raid_bdev1", 00:18:43.341 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:43.341 "strip_size_kb": 0, 00:18:43.341 "state": "online", 00:18:43.341 "raid_level": "raid1", 00:18:43.341 "superblock": true, 00:18:43.341 "num_base_bdevs": 2, 00:18:43.341 "num_base_bdevs_discovered": 1, 00:18:43.341 "num_base_bdevs_operational": 1, 00:18:43.341 "base_bdevs_list": [ 00:18:43.341 { 00:18:43.341 "name": null, 00:18:43.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.341 "is_configured": false, 00:18:43.341 "data_offset": 0, 00:18:43.341 "data_size": 7936 00:18:43.341 }, 00:18:43.341 { 00:18:43.341 "name": "BaseBdev2", 00:18:43.341 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:43.341 "is_configured": true, 00:18:43.341 "data_offset": 256, 00:18:43.341 "data_size": 7936 
00:18:43.341 } 00:18:43.341 ] 00:18:43.341 }' 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.341 16:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.341 [2024-12-06 16:34:24.997104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.341 [2024-12-06 16:34:25.002442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:18:43.341 16:34:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.341 16:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:43.341 [2024-12-06 16:34:25.004686] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:44.273 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.273 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.274 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.274 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.274 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:18:44.274 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.274 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.274 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.274 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.274 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.274 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.274 "name": "raid_bdev1", 00:18:44.274 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:44.274 "strip_size_kb": 0, 00:18:44.274 "state": "online", 00:18:44.274 "raid_level": "raid1", 00:18:44.274 "superblock": true, 00:18:44.274 "num_base_bdevs": 2, 00:18:44.274 "num_base_bdevs_discovered": 2, 00:18:44.274 "num_base_bdevs_operational": 2, 00:18:44.274 "process": { 00:18:44.274 "type": "rebuild", 00:18:44.274 "target": "spare", 00:18:44.274 "progress": { 00:18:44.274 "blocks": 2560, 00:18:44.274 "percent": 32 00:18:44.274 } 00:18:44.274 }, 00:18:44.274 "base_bdevs_list": [ 00:18:44.274 { 00:18:44.274 "name": "spare", 00:18:44.274 "uuid": "c5f1c138-c459-54d4-8889-7e2ed5797588", 00:18:44.274 "is_configured": true, 00:18:44.274 "data_offset": 256, 00:18:44.274 "data_size": 7936 00:18:44.274 }, 00:18:44.274 { 00:18:44.274 "name": "BaseBdev2", 00:18:44.274 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:44.274 "is_configured": true, 00:18:44.274 "data_offset": 256, 00:18:44.274 "data_size": 7936 00:18:44.274 } 00:18:44.274 ] 00:18:44.274 }' 00:18:44.274 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.274 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:18:44.274 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:44.533 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=576 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.533 "name": "raid_bdev1", 00:18:44.533 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:44.533 "strip_size_kb": 0, 00:18:44.533 "state": "online", 00:18:44.533 "raid_level": "raid1", 00:18:44.533 "superblock": true, 00:18:44.533 "num_base_bdevs": 2, 00:18:44.533 "num_base_bdevs_discovered": 2, 00:18:44.533 "num_base_bdevs_operational": 2, 00:18:44.533 "process": { 00:18:44.533 "type": "rebuild", 00:18:44.533 "target": "spare", 00:18:44.533 "progress": { 00:18:44.533 "blocks": 2816, 00:18:44.533 "percent": 35 00:18:44.533 } 00:18:44.533 }, 00:18:44.533 "base_bdevs_list": [ 00:18:44.533 { 00:18:44.533 "name": "spare", 00:18:44.533 "uuid": "c5f1c138-c459-54d4-8889-7e2ed5797588", 00:18:44.533 "is_configured": true, 00:18:44.533 "data_offset": 256, 00:18:44.533 "data_size": 7936 00:18:44.533 }, 00:18:44.533 { 00:18:44.533 "name": "BaseBdev2", 00:18:44.533 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:44.533 "is_configured": true, 00:18:44.533 "data_offset": 256, 00:18:44.533 "data_size": 7936 00:18:44.533 } 00:18:44.533 ] 00:18:44.533 }' 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.533 16:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:18:45.468 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.468 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.468 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.468 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.468 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.468 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.468 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.468 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.468 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.468 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:45.468 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.726 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.726 "name": "raid_bdev1", 00:18:45.726 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:45.726 "strip_size_kb": 0, 00:18:45.726 "state": "online", 00:18:45.726 "raid_level": "raid1", 00:18:45.726 "superblock": true, 00:18:45.726 "num_base_bdevs": 2, 00:18:45.726 "num_base_bdevs_discovered": 2, 00:18:45.726 "num_base_bdevs_operational": 2, 00:18:45.726 "process": { 00:18:45.726 "type": "rebuild", 00:18:45.726 "target": "spare", 00:18:45.726 "progress": { 00:18:45.726 "blocks": 5632, 00:18:45.726 "percent": 70 00:18:45.726 } 00:18:45.726 }, 00:18:45.726 "base_bdevs_list": [ 00:18:45.726 { 00:18:45.726 "name": "spare", 
00:18:45.726 "uuid": "c5f1c138-c459-54d4-8889-7e2ed5797588", 00:18:45.726 "is_configured": true, 00:18:45.726 "data_offset": 256, 00:18:45.726 "data_size": 7936 00:18:45.726 }, 00:18:45.726 { 00:18:45.726 "name": "BaseBdev2", 00:18:45.726 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:45.726 "is_configured": true, 00:18:45.726 "data_offset": 256, 00:18:45.726 "data_size": 7936 00:18:45.726 } 00:18:45.726 ] 00:18:45.726 }' 00:18:45.726 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.726 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.726 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.726 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.726 16:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:46.292 [2024-12-06 16:34:28.118421] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:46.292 [2024-12-06 16:34:28.118585] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:46.292 [2024-12-06 16:34:28.118746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.858 16:34:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.858 "name": "raid_bdev1", 00:18:46.858 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:46.858 "strip_size_kb": 0, 00:18:46.858 "state": "online", 00:18:46.858 "raid_level": "raid1", 00:18:46.858 "superblock": true, 00:18:46.858 "num_base_bdevs": 2, 00:18:46.858 "num_base_bdevs_discovered": 2, 00:18:46.858 "num_base_bdevs_operational": 2, 00:18:46.858 "base_bdevs_list": [ 00:18:46.858 { 00:18:46.858 "name": "spare", 00:18:46.858 "uuid": "c5f1c138-c459-54d4-8889-7e2ed5797588", 00:18:46.858 "is_configured": true, 00:18:46.858 "data_offset": 256, 00:18:46.858 "data_size": 7936 00:18:46.858 }, 00:18:46.858 { 00:18:46.858 "name": "BaseBdev2", 00:18:46.858 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:46.858 "is_configured": true, 00:18:46.858 "data_offset": 256, 00:18:46.858 "data_size": 7936 00:18:46.858 } 00:18:46.858 ] 00:18:46.858 }' 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.858 16:34:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.858 "name": "raid_bdev1", 00:18:46.858 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:46.858 "strip_size_kb": 0, 00:18:46.858 "state": "online", 00:18:46.858 "raid_level": "raid1", 00:18:46.858 "superblock": true, 00:18:46.858 "num_base_bdevs": 2, 00:18:46.858 "num_base_bdevs_discovered": 2, 00:18:46.858 "num_base_bdevs_operational": 2, 00:18:46.858 "base_bdevs_list": [ 00:18:46.858 { 00:18:46.858 "name": "spare", 00:18:46.858 "uuid": "c5f1c138-c459-54d4-8889-7e2ed5797588", 00:18:46.858 "is_configured": true, 00:18:46.858 "data_offset": 256, 00:18:46.858 
"data_size": 7936 00:18:46.858 }, 00:18:46.858 { 00:18:46.858 "name": "BaseBdev2", 00:18:46.858 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:46.858 "is_configured": true, 00:18:46.858 "data_offset": 256, 00:18:46.858 "data_size": 7936 00:18:46.858 } 00:18:46.858 ] 00:18:46.858 }' 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:46.858 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.116 "name": "raid_bdev1", 00:18:47.116 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:47.116 "strip_size_kb": 0, 00:18:47.116 "state": "online", 00:18:47.116 "raid_level": "raid1", 00:18:47.116 "superblock": true, 00:18:47.116 "num_base_bdevs": 2, 00:18:47.116 "num_base_bdevs_discovered": 2, 00:18:47.116 "num_base_bdevs_operational": 2, 00:18:47.116 "base_bdevs_list": [ 00:18:47.116 { 00:18:47.116 "name": "spare", 00:18:47.116 "uuid": "c5f1c138-c459-54d4-8889-7e2ed5797588", 00:18:47.116 "is_configured": true, 00:18:47.116 "data_offset": 256, 00:18:47.116 "data_size": 7936 00:18:47.116 }, 00:18:47.116 { 00:18:47.116 "name": "BaseBdev2", 00:18:47.116 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:47.116 "is_configured": true, 00:18:47.116 "data_offset": 256, 00:18:47.116 "data_size": 7936 00:18:47.116 } 00:18:47.116 ] 00:18:47.116 }' 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.116 16:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.373 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:47.373 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.374 [2024-12-06 16:34:29.134121] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.374 [2024-12-06 16:34:29.134253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.374 [2024-12-06 16:34:29.134409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.374 [2024-12-06 16:34:29.134527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.374 [2024-12-06 16:34:29.134588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:47.374 
16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:47.374 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:47.632 /dev/nbd0 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:47.632 1+0 records in 00:18:47.632 1+0 records out 00:18:47.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430841 s, 9.5 MB/s 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:47.632 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.890 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:47.890 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:47.890 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:47.890 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:47.890 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:47.890 /dev/nbd1 00:18:47.890 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:47.890 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:47.890 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:47.890 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:47.890 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:47.890 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:47.890 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 
00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.149 1+0 records in 00:18:48.149 1+0 records out 00:18:48.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042383 s, 9.7 MB/s 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:48.149 16:34:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.149 16:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:48.407 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:48.407 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:48.407 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:48.407 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.407 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.407 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:48.407 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:48.407 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.407 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.407 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.667 
16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.667 [2024-12-06 16:34:30.328471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:48.667 [2024-12-06 16:34:30.328593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.667 [2024-12-06 16:34:30.328652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:48.667 [2024-12-06 16:34:30.328693] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.667 [2024-12-06 16:34:30.331230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.667 [2024-12-06 16:34:30.331328] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: spare 00:18:48.667 [2024-12-06 16:34:30.331474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:48.667 [2024-12-06 16:34:30.331547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.667 [2024-12-06 16:34:30.331717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:48.667 spare 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.667 [2024-12-06 16:34:30.431672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:18:48.667 [2024-12-06 16:34:30.431788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:48.667 [2024-12-06 16:34:30.432264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:18:48.667 [2024-12-06 16:34:30.432539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:18:48.667 [2024-12-06 16:34:30.432608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:18:48.667 [2024-12-06 16:34:30.432855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.667 
16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.667 "name": "raid_bdev1", 00:18:48.667 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:48.667 "strip_size_kb": 0, 00:18:48.667 "state": "online", 00:18:48.667 "raid_level": "raid1", 00:18:48.667 "superblock": true, 00:18:48.667 "num_base_bdevs": 2, 00:18:48.667 "num_base_bdevs_discovered": 2, 00:18:48.667 "num_base_bdevs_operational": 2, 00:18:48.667 "base_bdevs_list": [ 00:18:48.667 { 00:18:48.667 "name": "spare", 00:18:48.667 "uuid": 
"c5f1c138-c459-54d4-8889-7e2ed5797588", 00:18:48.667 "is_configured": true, 00:18:48.667 "data_offset": 256, 00:18:48.667 "data_size": 7936 00:18:48.667 }, 00:18:48.667 { 00:18:48.667 "name": "BaseBdev2", 00:18:48.667 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:48.667 "is_configured": true, 00:18:48.667 "data_offset": 256, 00:18:48.667 "data_size": 7936 00:18:48.667 } 00:18:48.667 ] 00:18:48.667 }' 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.667 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.234 "name": "raid_bdev1", 00:18:49.234 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:49.234 "strip_size_kb": 0, 00:18:49.234 
"state": "online", 00:18:49.234 "raid_level": "raid1", 00:18:49.234 "superblock": true, 00:18:49.234 "num_base_bdevs": 2, 00:18:49.234 "num_base_bdevs_discovered": 2, 00:18:49.234 "num_base_bdevs_operational": 2, 00:18:49.234 "base_bdevs_list": [ 00:18:49.234 { 00:18:49.234 "name": "spare", 00:18:49.234 "uuid": "c5f1c138-c459-54d4-8889-7e2ed5797588", 00:18:49.234 "is_configured": true, 00:18:49.234 "data_offset": 256, 00:18:49.234 "data_size": 7936 00:18:49.234 }, 00:18:49.234 { 00:18:49.234 "name": "BaseBdev2", 00:18:49.234 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:49.234 "is_configured": true, 00:18:49.234 "data_offset": 256, 00:18:49.234 "data_size": 7936 00:18:49.234 } 00:18:49.234 ] 00:18:49.234 }' 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:49.234 16:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:49.234 16:34:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.234 [2024-12-06 16:34:31.007935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.234 
16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.234 "name": "raid_bdev1", 00:18:49.234 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:49.234 "strip_size_kb": 0, 00:18:49.234 "state": "online", 00:18:49.234 "raid_level": "raid1", 00:18:49.234 "superblock": true, 00:18:49.234 "num_base_bdevs": 2, 00:18:49.234 "num_base_bdevs_discovered": 1, 00:18:49.234 "num_base_bdevs_operational": 1, 00:18:49.234 "base_bdevs_list": [ 00:18:49.234 { 00:18:49.234 "name": null, 00:18:49.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.234 "is_configured": false, 00:18:49.234 "data_offset": 0, 00:18:49.234 "data_size": 7936 00:18:49.234 }, 00:18:49.234 { 00:18:49.234 "name": "BaseBdev2", 00:18:49.234 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:49.234 "is_configured": true, 00:18:49.234 "data_offset": 256, 00:18:49.234 "data_size": 7936 00:18:49.234 } 00:18:49.234 ] 00:18:49.234 }' 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.234 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.800 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:49.800 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.800 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.800 [2024-12-06 16:34:31.399345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.800 [2024-12-06 16:34:31.399604] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:49.800 [2024-12-06 16:34:31.399662] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:18:49.800 [2024-12-06 16:34:31.399722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.800 [2024-12-06 16:34:31.405149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:18:49.800 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.800 16:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:49.800 [2024-12-06 16:34:31.407509] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.744 "name": "raid_bdev1", 00:18:50.744 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:50.744 
"strip_size_kb": 0, 00:18:50.744 "state": "online", 00:18:50.744 "raid_level": "raid1", 00:18:50.744 "superblock": true, 00:18:50.744 "num_base_bdevs": 2, 00:18:50.744 "num_base_bdevs_discovered": 2, 00:18:50.744 "num_base_bdevs_operational": 2, 00:18:50.744 "process": { 00:18:50.744 "type": "rebuild", 00:18:50.744 "target": "spare", 00:18:50.744 "progress": { 00:18:50.744 "blocks": 2560, 00:18:50.744 "percent": 32 00:18:50.744 } 00:18:50.744 }, 00:18:50.744 "base_bdevs_list": [ 00:18:50.744 { 00:18:50.744 "name": "spare", 00:18:50.744 "uuid": "c5f1c138-c459-54d4-8889-7e2ed5797588", 00:18:50.744 "is_configured": true, 00:18:50.744 "data_offset": 256, 00:18:50.744 "data_size": 7936 00:18:50.744 }, 00:18:50.744 { 00:18:50.744 "name": "BaseBdev2", 00:18:50.744 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:50.744 "is_configured": true, 00:18:50.744 "data_offset": 256, 00:18:50.744 "data_size": 7936 00:18:50.744 } 00:18:50.744 ] 00:18:50.744 }' 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.744 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.744 [2024-12-06 16:34:32.536311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.021 [2024-12-06 16:34:32.612608] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:18:51.021 [2024-12-06 16:34:32.612723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.021 [2024-12-06 16:34:32.612744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.021 [2024-12-06 16:34:32.612752] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.021 16:34:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.021 "name": "raid_bdev1", 00:18:51.021 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:51.021 "strip_size_kb": 0, 00:18:51.021 "state": "online", 00:18:51.021 "raid_level": "raid1", 00:18:51.021 "superblock": true, 00:18:51.021 "num_base_bdevs": 2, 00:18:51.021 "num_base_bdevs_discovered": 1, 00:18:51.021 "num_base_bdevs_operational": 1, 00:18:51.021 "base_bdevs_list": [ 00:18:51.021 { 00:18:51.021 "name": null, 00:18:51.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.021 "is_configured": false, 00:18:51.021 "data_offset": 0, 00:18:51.021 "data_size": 7936 00:18:51.021 }, 00:18:51.021 { 00:18:51.021 "name": "BaseBdev2", 00:18:51.021 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:51.021 "is_configured": true, 00:18:51.021 "data_offset": 256, 00:18:51.021 "data_size": 7936 00:18:51.021 } 00:18:51.021 ] 00:18:51.021 }' 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.021 16:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.279 16:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:51.279 16:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.279 16:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.279 [2024-12-06 16:34:33.028913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:51.279 [2024-12-06 16:34:33.029026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.279 [2024-12-06 
16:34:33.029057] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:51.279 [2024-12-06 16:34:33.029067] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.279 [2024-12-06 16:34:33.029597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.279 [2024-12-06 16:34:33.029619] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:51.279 [2024-12-06 16:34:33.029716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:51.279 [2024-12-06 16:34:33.029730] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:51.279 [2024-12-06 16:34:33.029746] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:51.279 [2024-12-06 16:34:33.029781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.279 spare 00:18:51.279 [2024-12-06 16:34:33.034759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:51.279 16:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.279 16:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:51.279 [2024-12-06 16:34:33.036784] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:52.213 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.213 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.213 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.213 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:18:52.213 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.213 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.213 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.213 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.213 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.471 "name": "raid_bdev1", 00:18:52.471 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:52.471 "strip_size_kb": 0, 00:18:52.471 "state": "online", 00:18:52.471 "raid_level": "raid1", 00:18:52.471 "superblock": true, 00:18:52.471 "num_base_bdevs": 2, 00:18:52.471 "num_base_bdevs_discovered": 2, 00:18:52.471 "num_base_bdevs_operational": 2, 00:18:52.471 "process": { 00:18:52.471 "type": "rebuild", 00:18:52.471 "target": "spare", 00:18:52.471 "progress": { 00:18:52.471 "blocks": 2560, 00:18:52.471 "percent": 32 00:18:52.471 } 00:18:52.471 }, 00:18:52.471 "base_bdevs_list": [ 00:18:52.471 { 00:18:52.471 "name": "spare", 00:18:52.471 "uuid": "c5f1c138-c459-54d4-8889-7e2ed5797588", 00:18:52.471 "is_configured": true, 00:18:52.471 "data_offset": 256, 00:18:52.471 "data_size": 7936 00:18:52.471 }, 00:18:52.471 { 00:18:52.471 "name": "BaseBdev2", 00:18:52.471 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:52.471 "is_configured": true, 00:18:52.471 "data_offset": 256, 00:18:52.471 "data_size": 7936 00:18:52.471 } 00:18:52.471 ] 00:18:52.471 }' 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.471 16:34:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.471 [2024-12-06 16:34:34.153395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.471 [2024-12-06 16:34:34.241690] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:52.471 [2024-12-06 16:34:34.241772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.471 [2024-12-06 16:34:34.241789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.471 [2024-12-06 16:34:34.241800] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.471 16:34:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.471 "name": "raid_bdev1", 00:18:52.471 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:52.471 "strip_size_kb": 0, 00:18:52.471 "state": "online", 00:18:52.471 "raid_level": "raid1", 00:18:52.471 "superblock": true, 00:18:52.471 "num_base_bdevs": 2, 00:18:52.471 "num_base_bdevs_discovered": 1, 00:18:52.471 "num_base_bdevs_operational": 1, 00:18:52.471 "base_bdevs_list": [ 00:18:52.471 { 00:18:52.471 "name": null, 00:18:52.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.471 "is_configured": false, 00:18:52.471 "data_offset": 0, 00:18:52.471 "data_size": 7936 00:18:52.471 }, 00:18:52.471 { 00:18:52.471 "name": "BaseBdev2", 00:18:52.471 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:52.471 "is_configured": true, 00:18:52.471 "data_offset": 256, 00:18:52.471 
"data_size": 7936 00:18:52.471 } 00:18:52.471 ] 00:18:52.471 }' 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.471 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.036 "name": "raid_bdev1", 00:18:53.036 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0", 00:18:53.036 "strip_size_kb": 0, 00:18:53.036 "state": "online", 00:18:53.036 "raid_level": "raid1", 00:18:53.036 "superblock": true, 00:18:53.036 "num_base_bdevs": 2, 00:18:53.036 "num_base_bdevs_discovered": 1, 00:18:53.036 "num_base_bdevs_operational": 1, 00:18:53.036 "base_bdevs_list": [ 00:18:53.036 { 00:18:53.036 "name": null, 00:18:53.036 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:53.036 "is_configured": false, 00:18:53.036 "data_offset": 0, 00:18:53.036 "data_size": 7936 00:18:53.036 }, 00:18:53.036 { 00:18:53.036 "name": "BaseBdev2", 00:18:53.036 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190", 00:18:53.036 "is_configured": true, 00:18:53.036 "data_offset": 256, 00:18:53.036 "data_size": 7936 00:18:53.036 } 00:18:53.036 ] 00:18:53.036 }' 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.036 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.036 [2024-12-06 16:34:34.785811] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:53.036 [2024-12-06 16:34:34.785874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.037 [2024-12-06 16:34:34.785895] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:18:53.037 [2024-12-06 16:34:34.785907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.037 [2024-12-06 16:34:34.786379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.037 [2024-12-06 16:34:34.786412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:53.037 [2024-12-06 16:34:34.786495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:53.037 [2024-12-06 16:34:34.786523] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:53.037 [2024-12-06 16:34:34.786533] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:53.037 [2024-12-06 16:34:34.786547] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:53.037 BaseBdev1 00:18:53.037 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.037 16:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info
00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.967 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:54.225 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:54.225 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:54.225 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:54.225 "name": "raid_bdev1",
00:18:54.225 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0",
00:18:54.226 "strip_size_kb": 0,
00:18:54.226 "state": "online",
00:18:54.226 "raid_level": "raid1",
00:18:54.226 "superblock": true,
00:18:54.226 "num_base_bdevs": 2,
00:18:54.226 "num_base_bdevs_discovered": 1,
00:18:54.226 "num_base_bdevs_operational": 1,
00:18:54.226 "base_bdevs_list": [
00:18:54.226 {
00:18:54.226 "name": null,
00:18:54.226 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:54.226 "is_configured": false,
00:18:54.226 "data_offset": 0,
00:18:54.226 "data_size": 7936
00:18:54.226 },
00:18:54.226 {
00:18:54.226 "name": "BaseBdev2",
00:18:54.226 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190",
00:18:54.226 "is_configured": true,
00:18:54.226 "data_offset": 256,
00:18:54.226 "data_size": 7936
00:18:54.226 }
00:18:54.226 ]
00:18:54.226 }'
00:18:54.226 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:54.226 16:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:54.485 "name": "raid_bdev1",
00:18:54.485 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0",
00:18:54.485 "strip_size_kb": 0,
00:18:54.485 "state": "online",
00:18:54.485 "raid_level": "raid1",
00:18:54.485 "superblock": true,
00:18:54.485 "num_base_bdevs": 2,
00:18:54.485 "num_base_bdevs_discovered": 1,
00:18:54.485 "num_base_bdevs_operational": 1,
00:18:54.485 "base_bdevs_list": [
00:18:54.485 {
00:18:54.485 "name": null,
00:18:54.485 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:54.485 "is_configured": false,
00:18:54.485 "data_offset": 0,
00:18:54.485 "data_size": 7936
00:18:54.485 },
00:18:54.485 {
00:18:54.485 "name": "BaseBdev2",
00:18:54.485 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190",
00:18:54.485 "is_configured": true,
00:18:54.485 "data_offset": 256,
00:18:54.485 "data_size": 7936
00:18:54.485 }
00:18:54.485 ]
00:18:54.485 }'
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:54.485 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:54.744 [2024-12-06 16:34:36.347304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:54.744 [2024-12-06 16:34:36.347494] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:18:54.744 [2024-12-06 16:34:36.347514] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:18:54.744 request:
00:18:54.744 {
00:18:54.744 "base_bdev": "BaseBdev1",
00:18:54.744 "raid_bdev": "raid_bdev1",
00:18:54.744 "method": "bdev_raid_add_base_bdev",
00:18:54.744 "req_id": 1
00:18:54.744 }
00:18:54.744 Got JSON-RPC error response
00:18:54.744 response:
00:18:54.744 {
00:18:54.744 "code": -22,
00:18:54.744 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:18:54.744 }
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:54.744 16:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:55.680 "name": "raid_bdev1",
00:18:55.680 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0",
00:18:55.680 "strip_size_kb": 0,
00:18:55.680 "state": "online",
00:18:55.680 "raid_level": "raid1",
00:18:55.680 "superblock": true,
00:18:55.680 "num_base_bdevs": 2,
00:18:55.680 "num_base_bdevs_discovered": 1,
00:18:55.680 "num_base_bdevs_operational": 1,
00:18:55.680 "base_bdevs_list": [
00:18:55.680 {
00:18:55.680 "name": null,
00:18:55.680 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:55.680 "is_configured": false,
00:18:55.680 "data_offset": 0,
00:18:55.680 "data_size": 7936
00:18:55.680 },
00:18:55.680 {
00:18:55.680 "name": "BaseBdev2",
00:18:55.680 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190",
00:18:55.680 "is_configured": true,
00:18:55.680 "data_offset": 256,
00:18:55.680 "data_size": 7936
00:18:55.680 }
00:18:55.680 ]
00:18:55.680 }'
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:55.680 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:55.938 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:55.938 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:55.938 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:55.938 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:55.938 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:55.938 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:55.938 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:55.938 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:55.938 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:55.938 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:56.196 "name": "raid_bdev1",
00:18:56.196 "uuid": "bd92067d-717f-46e3-8307-9c957d6189d0",
00:18:56.196 "strip_size_kb": 0,
00:18:56.196 "state": "online",
00:18:56.196 "raid_level": "raid1",
00:18:56.196 "superblock": true,
00:18:56.196 "num_base_bdevs": 2,
00:18:56.196 "num_base_bdevs_discovered": 1,
00:18:56.196 "num_base_bdevs_operational": 1,
00:18:56.196 "base_bdevs_list": [
00:18:56.196 {
00:18:56.196 "name": null,
00:18:56.196 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:56.196 "is_configured": false,
00:18:56.196 "data_offset": 0,
00:18:56.196 "data_size": 7936
00:18:56.196 },
00:18:56.196 {
00:18:56.196 "name": "BaseBdev2",
00:18:56.196 "uuid": "8dfa8098-da01-5153-984e-e26b728c5190",
00:18:56.196 "is_configured": true,
00:18:56.196 "data_offset": 256,
00:18:56.196 "data_size": 7936
00:18:56.196 }
00:18:56.196 ]
00:18:56.196 }'
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 97349
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 97349 ']'
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 97349
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97349
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:56.196 killing process with pid 97349
16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97349'
16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 97349
00:18:56.196 Received shutdown signal, test time was about 60.000000 seconds
00:18:56.196
00:18:56.196 Latency(us)
00:18:56.196 [2024-12-06T16:34:38.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:56.196 [2024-12-06T16:34:38.035Z] ===================================================================================================================
00:18:56.196 [2024-12-06T16:34:38.035Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:56.196 [2024-12-06 16:34:37.904763] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:56.196 16:34:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 97349
00:18:56.196 [2024-12-06 16:34:37.904901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:56.196 [2024-12-06 16:34:37.904966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:56.196 [2024-12-06 16:34:37.904981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:18:56.196 [2024-12-06 16:34:37.937523] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:56.454 16:34:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0
00:18:56.454
00:18:56.454 real 0m18.148s
00:18:56.454 user 0m23.856s
00:18:56.454 sys 0m2.479s
00:18:56.454 16:34:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:56.454 16:34:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:56.454 ************************************
00:18:56.454 END TEST raid_rebuild_test_sb_4k
00:18:56.454 ************************************
00:18:56.454 16:34:38 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32'
00:18:56.454 16:34:38 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true
16:34:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:18:56.454 16:34:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:56.454 16:34:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:18:56.454 ************************************
00:18:56.454 START TEST raid_state_function_test_sb_md_separate
00:18:56.455 ************************************
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=98031
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98031'
Process raid pid: 98031
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 98031
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 98031 ']'
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:56.455 16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:56.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable
16:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:56.714 [2024-12-06 16:34:38.298635] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization...
00:18:56.714 [2024-12-06 16:34:38.298762] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:56.714 [2024-12-06 16:34:38.477257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:56.714 [2024-12-06 16:34:38.505291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:56.714 [2024-12-06 16:34:38.550567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:56.714 [2024-12-06 16:34:38.550629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:57.649 [2024-12-06 16:34:39.294509] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:57.649 [2024-12-06 16:34:39.294571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:57.649 [2024-12-06 16:34:39.294582] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:57.649 [2024-12-06 16:34:39.294611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.649 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:57.649 "name": "Existed_Raid",
00:18:57.649 "uuid": "07655b50-9df1-488a-b484-b3ff510896b4",
00:18:57.649 "strip_size_kb": 0,
00:18:57.649 "state": "configuring",
00:18:57.649 "raid_level": "raid1",
00:18:57.649 "superblock": true,
00:18:57.649 "num_base_bdevs": 2,
00:18:57.649 "num_base_bdevs_discovered": 0,
00:18:57.649 "num_base_bdevs_operational": 2,
00:18:57.649 "base_bdevs_list": [
00:18:57.649 {
00:18:57.649 "name": "BaseBdev1",
00:18:57.649 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:57.649 "is_configured": false,
00:18:57.649 "data_offset": 0,
00:18:57.649 "data_size": 0
00:18:57.649 },
00:18:57.649 {
00:18:57.649 "name": "BaseBdev2",
00:18:57.649 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:57.649 "is_configured": false,
00:18:57.649 "data_offset": 0,
00:18:57.649 "data_size": 0
00:18:57.649 }
00:18:57.649 ]
00:18:57.649 }'
00:18:57.650 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:57.650 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:57.908 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:18:57.908 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.908 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:57.908 [2024-12-06 16:34:39.665796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:57.908 [2024-12-06 16:34:39.665856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:18:57.908 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.908 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:18:57.908 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.908 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:57.908 [2024-12-06 16:34:39.673790] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:57.908 [2024-12-06 16:34:39.673840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:57.908 [2024-12-06 16:34:39.673850] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:57.908 [2024-12-06 16:34:39.673876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:57.908 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.908 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:57.909 [2024-12-06 16:34:39.691755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:57.909 [
00:18:57.909 {
00:18:57.909 "name": "BaseBdev1",
00:18:57.909 "aliases": [
00:18:57.909 "be3d8185-88a4-42bc-bfa2-919a527e3cdc"
00:18:57.909 ],
00:18:57.909 "product_name": "Malloc disk",
00:18:57.909 "block_size": 4096,
00:18:57.909 "num_blocks": 8192,
00:18:57.909 "uuid": "be3d8185-88a4-42bc-bfa2-919a527e3cdc",
00:18:57.909 "md_size": 32,
00:18:57.909 "md_interleave": false,
00:18:57.909 "dif_type": 0,
00:18:57.909 "assigned_rate_limits": {
00:18:57.909 "rw_ios_per_sec": 0,
00:18:57.909 "rw_mbytes_per_sec": 0,
00:18:57.909 "r_mbytes_per_sec": 0,
00:18:57.909 "w_mbytes_per_sec": 0
00:18:57.909 },
00:18:57.909 "claimed": true,
00:18:57.909 "claim_type": "exclusive_write",
00:18:57.909 "zoned": false,
00:18:57.909 "supported_io_types": {
00:18:57.909 "read": true,
00:18:57.909 "write": true,
00:18:57.909 "unmap": true,
00:18:57.909 "flush": true,
00:18:57.909 "reset": true,
00:18:57.909 "nvme_admin": false,
00:18:57.909 "nvme_io": false,
00:18:57.909 "nvme_io_md": false,
00:18:57.909 "write_zeroes": true,
00:18:57.909 "zcopy": true,
00:18:57.909 "get_zone_info": false,
00:18:57.909 "zone_management": false,
00:18:57.909 "zone_append": false,
00:18:57.909 "compare": false,
00:18:57.909 "compare_and_write": false,
00:18:57.909 "abort": true,
00:18:57.909 "seek_hole": false,
00:18:57.909 "seek_data": false,
00:18:57.909 "copy": true,
00:18:57.909 "nvme_iov_md": false
00:18:57.909 },
00:18:57.909 "memory_domains": [
00:18:57.909 {
00:18:57.909 "dma_device_id": "system",
00:18:57.909 "dma_device_type": 1
00:18:57.909 },
00:18:57.909 {
00:18:57.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:57.909 "dma_device_type": 2
00:18:57.909 }
00:18:57.909 ],
00:18:57.909 "driver_specific": {}
00:18:57.909 }
00:18:57.909 ]
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:57.909 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.167 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:58.167 "name": "Existed_Raid",
00:18:58.167 "uuid": "6b891da5-8e91-4cc3-8feb-0c96252247c8",
00:18:58.167 "strip_size_kb": 0,
00:18:58.167 "state": "configuring",
00:18:58.167 "raid_level": "raid1",
00:18:58.167 "superblock": true,
00:18:58.167 "num_base_bdevs": 2,
00:18:58.167 "num_base_bdevs_discovered": 1,
00:18:58.167 "num_base_bdevs_operational": 2,
00:18:58.167 "base_bdevs_list": [
00:18:58.167 {
00:18:58.167 "name": "BaseBdev1",
00:18:58.167 "uuid": "be3d8185-88a4-42bc-bfa2-919a527e3cdc",
00:18:58.167 "is_configured": true,
00:18:58.167 "data_offset": 256,
00:18:58.167 "data_size": 7936
00:18:58.167 },
00:18:58.167 {
00:18:58.167 "name": "BaseBdev2",
00:18:58.167 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:58.167 "is_configured": false,
00:18:58.167 "data_offset": 0,
00:18:58.167 "data_size": 0
00:18:58.167 }
00:18:58.167 ]
00:18:58.167 }'
00:18:58.167 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:58.167 16:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:58.425 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:18:58.425 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.425 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:58.426 [2024-12-06 16:34:40.115175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:58.426 [2024-12-06 16:34:40.115253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:58.426 [2024-12-06 16:34:40.123168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:58.426 [2024-12-06 16:34:40.125420] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:58.426 [2024-12-06 16:34:40.125462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:58.426 "name": "Existed_Raid",
00:18:58.426 "uuid": "2664cd97-78df-412d-8d04-16fe9f878675",
00:18:58.426 "strip_size_kb": 0,
00:18:58.426 "state": "configuring",
00:18:58.426 "raid_level": "raid1",
00:18:58.426 "superblock": true,
00:18:58.426 "num_base_bdevs": 2,
00:18:58.426 "num_base_bdevs_discovered": 1,
00:18:58.426 "num_base_bdevs_operational": 2,
00:18:58.426 "base_bdevs_list": [
00:18:58.426 {
00:18:58.426 "name": "BaseBdev1",
00:18:58.426 "uuid": "be3d8185-88a4-42bc-bfa2-919a527e3cdc",
00:18:58.426 "is_configured": true,
00:18:58.426 "data_offset": 256,
00:18:58.426 "data_size": 7936
00:18:58.426 },
00:18:58.426 {
00:18:58.426 "name": "BaseBdev2",
00:18:58.426 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:58.426 "is_configured": false,
00:18:58.426 "data_offset": 0,
00:18:58.426 "data_size": 0
00:18:58.426 }
00:18:58.426 ]
00:18:58.426 }'
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:58.426 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:58.684 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2
00:18:58.684 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.684 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:58.943 [2024-12-06 16:34:40.522650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:58.943 [2024-12-06 16:34:40.522862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:18:58.943 [2024-12-06 16:34:40.522879] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:18:58.943 [2024-12-06 16:34:40.522981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
BaseBdev2
00:18:58.943 [2024-12-06 16:34:40.523123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:18:58.943 [2024-12-06 16:34:40.523142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:18:58.943 [2024-12-06 16:34:40.523267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.943 [ 00:18:58.943 { 00:18:58.943 "name": "BaseBdev2", 00:18:58.943 "aliases": [ 00:18:58.943 
"be86aed2-78c6-47aa-982d-358d7ea7c8ce" 00:18:58.943 ], 00:18:58.943 "product_name": "Malloc disk", 00:18:58.943 "block_size": 4096, 00:18:58.943 "num_blocks": 8192, 00:18:58.943 "uuid": "be86aed2-78c6-47aa-982d-358d7ea7c8ce", 00:18:58.943 "md_size": 32, 00:18:58.943 "md_interleave": false, 00:18:58.943 "dif_type": 0, 00:18:58.943 "assigned_rate_limits": { 00:18:58.943 "rw_ios_per_sec": 0, 00:18:58.943 "rw_mbytes_per_sec": 0, 00:18:58.943 "r_mbytes_per_sec": 0, 00:18:58.943 "w_mbytes_per_sec": 0 00:18:58.943 }, 00:18:58.943 "claimed": true, 00:18:58.943 "claim_type": "exclusive_write", 00:18:58.943 "zoned": false, 00:18:58.943 "supported_io_types": { 00:18:58.943 "read": true, 00:18:58.943 "write": true, 00:18:58.943 "unmap": true, 00:18:58.943 "flush": true, 00:18:58.943 "reset": true, 00:18:58.943 "nvme_admin": false, 00:18:58.943 "nvme_io": false, 00:18:58.943 "nvme_io_md": false, 00:18:58.943 "write_zeroes": true, 00:18:58.943 "zcopy": true, 00:18:58.943 "get_zone_info": false, 00:18:58.943 "zone_management": false, 00:18:58.943 "zone_append": false, 00:18:58.943 "compare": false, 00:18:58.943 "compare_and_write": false, 00:18:58.943 "abort": true, 00:18:58.943 "seek_hole": false, 00:18:58.943 "seek_data": false, 00:18:58.943 "copy": true, 00:18:58.943 "nvme_iov_md": false 00:18:58.943 }, 00:18:58.943 "memory_domains": [ 00:18:58.943 { 00:18:58.943 "dma_device_id": "system", 00:18:58.943 "dma_device_type": 1 00:18:58.943 }, 00:18:58.943 { 00:18:58.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.943 "dma_device_type": 2 00:18:58.943 } 00:18:58.943 ], 00:18:58.943 "driver_specific": {} 00:18:58.943 } 00:18:58.943 ] 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.943 16:34:40 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.943 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.943 "name": "Existed_Raid", 00:18:58.943 "uuid": "2664cd97-78df-412d-8d04-16fe9f878675", 00:18:58.943 "strip_size_kb": 0, 00:18:58.943 "state": "online", 00:18:58.943 "raid_level": "raid1", 00:18:58.943 "superblock": true, 00:18:58.943 "num_base_bdevs": 2, 00:18:58.943 "num_base_bdevs_discovered": 2, 00:18:58.944 "num_base_bdevs_operational": 2, 00:18:58.944 "base_bdevs_list": [ 00:18:58.944 { 00:18:58.944 "name": "BaseBdev1", 00:18:58.944 "uuid": "be3d8185-88a4-42bc-bfa2-919a527e3cdc", 00:18:58.944 "is_configured": true, 00:18:58.944 "data_offset": 256, 00:18:58.944 "data_size": 7936 00:18:58.944 }, 00:18:58.944 { 00:18:58.944 "name": "BaseBdev2", 00:18:58.944 "uuid": "be86aed2-78c6-47aa-982d-358d7ea7c8ce", 00:18:58.944 "is_configured": true, 00:18:58.944 "data_offset": 256, 00:18:58.944 "data_size": 7936 00:18:58.944 } 00:18:58.944 ] 00:18:58.944 }' 00:18:58.944 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.944 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:59.220 16:34:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.220 [2024-12-06 16:34:40.922436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:59.220 "name": "Existed_Raid", 00:18:59.220 "aliases": [ 00:18:59.220 "2664cd97-78df-412d-8d04-16fe9f878675" 00:18:59.220 ], 00:18:59.220 "product_name": "Raid Volume", 00:18:59.220 "block_size": 4096, 00:18:59.220 "num_blocks": 7936, 00:18:59.220 "uuid": "2664cd97-78df-412d-8d04-16fe9f878675", 00:18:59.220 "md_size": 32, 00:18:59.220 "md_interleave": false, 00:18:59.220 "dif_type": 0, 00:18:59.220 "assigned_rate_limits": { 00:18:59.220 "rw_ios_per_sec": 0, 00:18:59.220 "rw_mbytes_per_sec": 0, 00:18:59.220 "r_mbytes_per_sec": 0, 00:18:59.220 "w_mbytes_per_sec": 0 00:18:59.220 }, 00:18:59.220 "claimed": false, 00:18:59.220 "zoned": false, 00:18:59.220 "supported_io_types": { 00:18:59.220 "read": true, 00:18:59.220 "write": true, 00:18:59.220 "unmap": false, 00:18:59.220 "flush": false, 00:18:59.220 "reset": true, 00:18:59.220 "nvme_admin": false, 00:18:59.220 "nvme_io": false, 00:18:59.220 "nvme_io_md": false, 00:18:59.220 "write_zeroes": true, 00:18:59.220 "zcopy": false, 00:18:59.220 "get_zone_info": 
false, 00:18:59.220 "zone_management": false, 00:18:59.220 "zone_append": false, 00:18:59.220 "compare": false, 00:18:59.220 "compare_and_write": false, 00:18:59.220 "abort": false, 00:18:59.220 "seek_hole": false, 00:18:59.220 "seek_data": false, 00:18:59.220 "copy": false, 00:18:59.220 "nvme_iov_md": false 00:18:59.220 }, 00:18:59.220 "memory_domains": [ 00:18:59.220 { 00:18:59.220 "dma_device_id": "system", 00:18:59.220 "dma_device_type": 1 00:18:59.220 }, 00:18:59.220 { 00:18:59.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.220 "dma_device_type": 2 00:18:59.220 }, 00:18:59.220 { 00:18:59.220 "dma_device_id": "system", 00:18:59.220 "dma_device_type": 1 00:18:59.220 }, 00:18:59.220 { 00:18:59.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.220 "dma_device_type": 2 00:18:59.220 } 00:18:59.220 ], 00:18:59.220 "driver_specific": { 00:18:59.220 "raid": { 00:18:59.220 "uuid": "2664cd97-78df-412d-8d04-16fe9f878675", 00:18:59.220 "strip_size_kb": 0, 00:18:59.220 "state": "online", 00:18:59.220 "raid_level": "raid1", 00:18:59.220 "superblock": true, 00:18:59.220 "num_base_bdevs": 2, 00:18:59.220 "num_base_bdevs_discovered": 2, 00:18:59.220 "num_base_bdevs_operational": 2, 00:18:59.220 "base_bdevs_list": [ 00:18:59.220 { 00:18:59.220 "name": "BaseBdev1", 00:18:59.220 "uuid": "be3d8185-88a4-42bc-bfa2-919a527e3cdc", 00:18:59.220 "is_configured": true, 00:18:59.220 "data_offset": 256, 00:18:59.220 "data_size": 7936 00:18:59.220 }, 00:18:59.220 { 00:18:59.220 "name": "BaseBdev2", 00:18:59.220 "uuid": "be86aed2-78c6-47aa-982d-358d7ea7c8ce", 00:18:59.220 "is_configured": true, 00:18:59.220 "data_offset": 256, 00:18:59.220 "data_size": 7936 00:18:59.220 } 00:18:59.220 ] 00:18:59.220 } 00:18:59.220 } 00:18:59.220 }' 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:59.220 16:34:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:59.220 BaseBdev2' 00:18:59.220 16:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.220 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:59.220 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.479 [2024-12-06 16:34:41.133780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.479 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.479 "name": "Existed_Raid", 00:18:59.479 "uuid": 
"2664cd97-78df-412d-8d04-16fe9f878675", 00:18:59.479 "strip_size_kb": 0, 00:18:59.479 "state": "online", 00:18:59.479 "raid_level": "raid1", 00:18:59.479 "superblock": true, 00:18:59.479 "num_base_bdevs": 2, 00:18:59.479 "num_base_bdevs_discovered": 1, 00:18:59.479 "num_base_bdevs_operational": 1, 00:18:59.479 "base_bdevs_list": [ 00:18:59.479 { 00:18:59.479 "name": null, 00:18:59.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.479 "is_configured": false, 00:18:59.479 "data_offset": 0, 00:18:59.479 "data_size": 7936 00:18:59.479 }, 00:18:59.479 { 00:18:59.480 "name": "BaseBdev2", 00:18:59.480 "uuid": "be86aed2-78c6-47aa-982d-358d7ea7c8ce", 00:18:59.480 "is_configured": true, 00:18:59.480 "data_offset": 256, 00:18:59.480 "data_size": 7936 00:18:59.480 } 00:18:59.480 ] 00:18:59.480 }' 00:18:59.480 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.480 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.057 [2024-12-06 16:34:41.641633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:00.057 [2024-12-06 16:34:41.641776] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.057 [2024-12-06 16:34:41.654626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.057 [2024-12-06 16:34:41.654675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.057 [2024-12-06 16:34:41.654688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.057 16:34:41 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 98031 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 98031 ']' 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 98031 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98031 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98031' 00:19:00.057 killing process with pid 98031 00:19:00.057 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 98031 00:19:00.058 [2024-12-06 16:34:41.731772] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:19:00.058 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 98031 00:19:00.058 [2024-12-06 16:34:41.732799] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:00.315 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:00.315 00:19:00.315 real 0m3.732s 00:19:00.315 user 0m5.887s 00:19:00.315 sys 0m0.769s 00:19:00.316 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.316 16:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.316 ************************************ 00:19:00.316 END TEST raid_state_function_test_sb_md_separate 00:19:00.316 ************************************ 00:19:00.316 16:34:41 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:00.316 16:34:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:00.316 16:34:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.316 16:34:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:00.316 ************************************ 00:19:00.316 START TEST raid_superblock_test_md_separate 00:19:00.316 ************************************ 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 
00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=98263 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 98263 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 98263 ']' 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.316 16:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.316 [2024-12-06 16:34:42.079386] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:19:00.316 [2024-12-06 16:34:42.079531] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98263 ] 00:19:00.573 [2024-12-06 16:34:42.253534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.573 [2024-12-06 16:34:42.281004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.573 [2024-12-06 16:34:42.328235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.573 [2024-12-06 16:34:42.328277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 
-- # local bdev_malloc=malloc1 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.505 malloc1 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.505 16:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.505 [2024-12-06 16:34:43.007071] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:01.505 [2024-12-06 16:34:43.007220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.505 [2024-12-06 16:34:43.007271] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:01.505 [2024-12-06 
16:34:43.007334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.505 [2024-12-06 16:34:43.009726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.505 [2024-12-06 16:34:43.009809] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:01.505 pt1 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.505 malloc2 00:19:01.505 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.505 16:34:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.506 [2024-12-06 16:34:43.041305] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:01.506 [2024-12-06 16:34:43.041425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.506 [2024-12-06 16:34:43.041489] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:01.506 [2024-12-06 16:34:43.041533] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.506 [2024-12-06 16:34:43.043821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.506 [2024-12-06 16:34:43.043918] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:01.506 pt2 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.506 [2024-12-06 16:34:43.053325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:01.506 
[2024-12-06 16:34:43.055494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:01.506 [2024-12-06 16:34:43.055691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:19:01.506 [2024-12-06 16:34:43.055758] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:01.506 [2024-12-06 16:34:43.055888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:01.506 [2024-12-06 16:34:43.055998] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:19:01.506 [2024-12-06 16:34:43.056009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:19:01.506 [2024-12-06 16:34:43.056136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.506 "name": "raid_bdev1", 00:19:01.506 "uuid": "bc82c501-6597-4f20-996a-1b128622d92a", 00:19:01.506 "strip_size_kb": 0, 00:19:01.506 "state": "online", 00:19:01.506 "raid_level": "raid1", 00:19:01.506 "superblock": true, 00:19:01.506 "num_base_bdevs": 2, 00:19:01.506 "num_base_bdevs_discovered": 2, 00:19:01.506 "num_base_bdevs_operational": 2, 00:19:01.506 "base_bdevs_list": [ 00:19:01.506 { 00:19:01.506 "name": "pt1", 00:19:01.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:01.506 "is_configured": true, 00:19:01.506 "data_offset": 256, 00:19:01.506 "data_size": 7936 00:19:01.506 }, 00:19:01.506 { 00:19:01.506 "name": "pt2", 00:19:01.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.506 "is_configured": true, 00:19:01.506 "data_offset": 256, 00:19:01.506 "data_size": 7936 00:19:01.506 } 00:19:01.506 ] 00:19:01.506 }' 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.506 16:34:43 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:01.764 [2024-12-06 16:34:43.477031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.764 "name": "raid_bdev1", 00:19:01.764 "aliases": [ 00:19:01.764 "bc82c501-6597-4f20-996a-1b128622d92a" 00:19:01.764 ], 00:19:01.764 "product_name": "Raid Volume", 00:19:01.764 "block_size": 4096, 00:19:01.764 "num_blocks": 7936, 00:19:01.764 "uuid": "bc82c501-6597-4f20-996a-1b128622d92a", 00:19:01.764 "md_size": 32, 00:19:01.764 "md_interleave": false, 00:19:01.764 "dif_type": 0, 00:19:01.764 
"assigned_rate_limits": { 00:19:01.764 "rw_ios_per_sec": 0, 00:19:01.764 "rw_mbytes_per_sec": 0, 00:19:01.764 "r_mbytes_per_sec": 0, 00:19:01.764 "w_mbytes_per_sec": 0 00:19:01.764 }, 00:19:01.764 "claimed": false, 00:19:01.764 "zoned": false, 00:19:01.764 "supported_io_types": { 00:19:01.764 "read": true, 00:19:01.764 "write": true, 00:19:01.764 "unmap": false, 00:19:01.764 "flush": false, 00:19:01.764 "reset": true, 00:19:01.764 "nvme_admin": false, 00:19:01.764 "nvme_io": false, 00:19:01.764 "nvme_io_md": false, 00:19:01.764 "write_zeroes": true, 00:19:01.764 "zcopy": false, 00:19:01.764 "get_zone_info": false, 00:19:01.764 "zone_management": false, 00:19:01.764 "zone_append": false, 00:19:01.764 "compare": false, 00:19:01.764 "compare_and_write": false, 00:19:01.764 "abort": false, 00:19:01.764 "seek_hole": false, 00:19:01.764 "seek_data": false, 00:19:01.764 "copy": false, 00:19:01.764 "nvme_iov_md": false 00:19:01.764 }, 00:19:01.764 "memory_domains": [ 00:19:01.764 { 00:19:01.764 "dma_device_id": "system", 00:19:01.764 "dma_device_type": 1 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.764 "dma_device_type": 2 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "dma_device_id": "system", 00:19:01.764 "dma_device_type": 1 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.764 "dma_device_type": 2 00:19:01.764 } 00:19:01.764 ], 00:19:01.764 "driver_specific": { 00:19:01.764 "raid": { 00:19:01.764 "uuid": "bc82c501-6597-4f20-996a-1b128622d92a", 00:19:01.764 "strip_size_kb": 0, 00:19:01.764 "state": "online", 00:19:01.764 "raid_level": "raid1", 00:19:01.764 "superblock": true, 00:19:01.764 "num_base_bdevs": 2, 00:19:01.764 "num_base_bdevs_discovered": 2, 00:19:01.764 "num_base_bdevs_operational": 2, 00:19:01.764 "base_bdevs_list": [ 00:19:01.764 { 00:19:01.764 "name": "pt1", 00:19:01.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:01.764 "is_configured": true, 
00:19:01.764 "data_offset": 256, 00:19:01.764 "data_size": 7936 00:19:01.764 }, 00:19:01.764 { 00:19:01.764 "name": "pt2", 00:19:01.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.764 "is_configured": true, 00:19:01.764 "data_offset": 256, 00:19:01.764 "data_size": 7936 00:19:01.764 } 00:19:01.764 ] 00:19:01.764 } 00:19:01.764 } 00:19:01.764 }' 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:01.764 pt2' 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.764 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ 
\f\a\l\s\e\ \0 ]] 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:02.022 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.023 [2024-12-06 16:34:43.692519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc82c501-6597-4f20-996a-1b128622d92a 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@436 -- # '[' -z bc82c501-6597-4f20-996a-1b128622d92a ']' 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.023 [2024-12-06 16:34:43.720170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.023 [2024-12-06 16:34:43.720249] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.023 [2024-12-06 16:34:43.720356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.023 [2024-12-06 16:34:43.720472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.023 [2024-12-06 16:34:43.720545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- 
# '[' -n '' ']' 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:02.023 16:34:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.023 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.281 [2024-12-06 16:34:43.863957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:02.281 [2024-12-06 16:34:43.866113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:02.281 [2024-12-06 16:34:43.866239] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:02.281 [2024-12-06 16:34:43.866377] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:02.281 [2024-12-06 16:34:43.866452] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.281 [2024-12-06 16:34:43.866497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:19:02.281 request: 00:19:02.281 { 00:19:02.281 "name": "raid_bdev1", 00:19:02.281 "raid_level": "raid1", 00:19:02.281 "base_bdevs": [ 00:19:02.281 "malloc1", 00:19:02.281 "malloc2" 00:19:02.281 ], 00:19:02.281 "superblock": false, 00:19:02.281 "method": "bdev_raid_create", 00:19:02.281 "req_id": 1 00:19:02.281 } 00:19:02.281 Got JSON-RPC error response 00:19:02.281 response: 00:19:02.281 { 00:19:02.281 "code": -17, 00:19:02.281 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:02.281 } 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # 
raid_bdev= 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.281 [2024-12-06 16:34:43.927830] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:02.281 [2024-12-06 16:34:43.927945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.281 [2024-12-06 16:34:43.927986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:02.281 [2024-12-06 16:34:43.928018] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.281 [2024-12-06 16:34:43.930206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.281 [2024-12-06 16:34:43.930311] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:02.281 [2024-12-06 16:34:43.930428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:02.281 [2024-12-06 16:34:43.930507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.281 pt1 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.281 "name": "raid_bdev1", 00:19:02.281 "uuid": "bc82c501-6597-4f20-996a-1b128622d92a", 00:19:02.281 "strip_size_kb": 0, 00:19:02.281 "state": "configuring", 00:19:02.281 "raid_level": "raid1", 00:19:02.281 "superblock": true, 00:19:02.281 "num_base_bdevs": 2, 00:19:02.281 "num_base_bdevs_discovered": 1, 00:19:02.281 "num_base_bdevs_operational": 2, 00:19:02.281 "base_bdevs_list": [ 00:19:02.281 { 
00:19:02.281 "name": "pt1", 00:19:02.281 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.281 "is_configured": true, 00:19:02.281 "data_offset": 256, 00:19:02.281 "data_size": 7936 00:19:02.281 }, 00:19:02.281 { 00:19:02.281 "name": null, 00:19:02.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.281 "is_configured": false, 00:19:02.281 "data_offset": 256, 00:19:02.281 "data_size": 7936 00:19:02.281 } 00:19:02.281 ] 00:19:02.281 }' 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.281 16:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.538 [2024-12-06 16:34:44.363130] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:02.538 [2024-12-06 16:34:44.363274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.538 [2024-12-06 16:34:44.363334] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:02.538 [2024-12-06 16:34:44.363389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.538 [2024-12-06 16:34:44.363642] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:19:02.538 [2024-12-06 16:34:44.363714] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:02.538 [2024-12-06 16:34:44.363803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:02.538 [2024-12-06 16:34:44.363856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.538 [2024-12-06 16:34:44.364021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:19:02.538 [2024-12-06 16:34:44.364081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:02.538 [2024-12-06 16:34:44.364195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:02.538 [2024-12-06 16:34:44.364339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:19:02.538 [2024-12-06 16:34:44.364387] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:19:02.538 [2024-12-06 16:34:44.364475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.538 pt2 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.538 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.794 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.794 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.794 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.794 "name": "raid_bdev1", 00:19:02.794 "uuid": "bc82c501-6597-4f20-996a-1b128622d92a", 00:19:02.794 "strip_size_kb": 0, 00:19:02.794 "state": "online", 00:19:02.794 "raid_level": "raid1", 00:19:02.794 "superblock": true, 00:19:02.794 "num_base_bdevs": 2, 00:19:02.794 "num_base_bdevs_discovered": 2, 00:19:02.794 "num_base_bdevs_operational": 2, 00:19:02.794 "base_bdevs_list": [ 00:19:02.794 { 00:19:02.794 "name": "pt1", 00:19:02.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.794 
"is_configured": true, 00:19:02.794 "data_offset": 256, 00:19:02.794 "data_size": 7936 00:19:02.794 }, 00:19:02.794 { 00:19:02.794 "name": "pt2", 00:19:02.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.794 "is_configured": true, 00:19:02.794 "data_offset": 256, 00:19:02.794 "data_size": 7936 00:19:02.794 } 00:19:02.795 ] 00:19:02.795 }' 00:19:02.795 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.795 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.052 [2024-12-06 16:34:44.782721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:03.052 "name": "raid_bdev1", 00:19:03.052 "aliases": [ 00:19:03.052 "bc82c501-6597-4f20-996a-1b128622d92a" 00:19:03.052 ], 00:19:03.052 "product_name": "Raid Volume", 00:19:03.052 "block_size": 4096, 00:19:03.052 "num_blocks": 7936, 00:19:03.052 "uuid": "bc82c501-6597-4f20-996a-1b128622d92a", 00:19:03.052 "md_size": 32, 00:19:03.052 "md_interleave": false, 00:19:03.052 "dif_type": 0, 00:19:03.052 "assigned_rate_limits": { 00:19:03.052 "rw_ios_per_sec": 0, 00:19:03.052 "rw_mbytes_per_sec": 0, 00:19:03.052 "r_mbytes_per_sec": 0, 00:19:03.052 "w_mbytes_per_sec": 0 00:19:03.052 }, 00:19:03.052 "claimed": false, 00:19:03.052 "zoned": false, 00:19:03.052 "supported_io_types": { 00:19:03.052 "read": true, 00:19:03.052 "write": true, 00:19:03.052 "unmap": false, 00:19:03.052 "flush": false, 00:19:03.052 "reset": true, 00:19:03.052 "nvme_admin": false, 00:19:03.052 "nvme_io": false, 00:19:03.052 "nvme_io_md": false, 00:19:03.052 "write_zeroes": true, 00:19:03.052 "zcopy": false, 00:19:03.052 "get_zone_info": false, 00:19:03.052 "zone_management": false, 00:19:03.052 "zone_append": false, 00:19:03.052 "compare": false, 00:19:03.052 "compare_and_write": false, 00:19:03.052 "abort": false, 00:19:03.052 "seek_hole": false, 00:19:03.052 "seek_data": false, 00:19:03.052 "copy": false, 00:19:03.052 "nvme_iov_md": false 00:19:03.052 }, 00:19:03.052 "memory_domains": [ 00:19:03.052 { 00:19:03.052 "dma_device_id": "system", 00:19:03.052 "dma_device_type": 1 00:19:03.052 }, 00:19:03.052 { 00:19:03.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.052 "dma_device_type": 2 00:19:03.052 }, 00:19:03.052 { 00:19:03.052 "dma_device_id": "system", 00:19:03.052 "dma_device_type": 1 00:19:03.052 }, 00:19:03.052 { 00:19:03.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.052 "dma_device_type": 2 00:19:03.052 } 00:19:03.052 ], 00:19:03.052 "driver_specific": { 
00:19:03.052 "raid": { 00:19:03.052 "uuid": "bc82c501-6597-4f20-996a-1b128622d92a", 00:19:03.052 "strip_size_kb": 0, 00:19:03.052 "state": "online", 00:19:03.052 "raid_level": "raid1", 00:19:03.052 "superblock": true, 00:19:03.052 "num_base_bdevs": 2, 00:19:03.052 "num_base_bdevs_discovered": 2, 00:19:03.052 "num_base_bdevs_operational": 2, 00:19:03.052 "base_bdevs_list": [ 00:19:03.052 { 00:19:03.052 "name": "pt1", 00:19:03.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.052 "is_configured": true, 00:19:03.052 "data_offset": 256, 00:19:03.052 "data_size": 7936 00:19:03.052 }, 00:19:03.052 { 00:19:03.052 "name": "pt2", 00:19:03.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.052 "is_configured": true, 00:19:03.052 "data_offset": 256, 00:19:03.052 "data_size": 7936 00:19:03.052 } 00:19:03.052 ] 00:19:03.052 } 00:19:03.052 } 00:19:03.052 }' 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:03.052 pt2' 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.052 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.310 16:34:44 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:03.310 16:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.310 16:34:44 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.310 [2024-12-06 16:34:44.998371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' bc82c501-6597-4f20-996a-1b128622d92a '!=' bc82c501-6597-4f20-996a-1b128622d92a ']' 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.310 [2024-12-06 16:34:45.046008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.310 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.311 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.311 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.311 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.311 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.311 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.311 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.311 "name": "raid_bdev1", 00:19:03.311 "uuid": "bc82c501-6597-4f20-996a-1b128622d92a", 00:19:03.311 "strip_size_kb": 0, 00:19:03.311 "state": "online", 00:19:03.311 "raid_level": "raid1", 00:19:03.311 "superblock": true, 00:19:03.311 "num_base_bdevs": 2, 00:19:03.311 "num_base_bdevs_discovered": 1, 00:19:03.311 "num_base_bdevs_operational": 1, 00:19:03.311 "base_bdevs_list": [ 00:19:03.311 { 00:19:03.311 "name": null, 00:19:03.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.311 "is_configured": false, 00:19:03.311 "data_offset": 0, 00:19:03.311 "data_size": 7936 00:19:03.311 }, 00:19:03.311 { 00:19:03.311 
"name": "pt2", 00:19:03.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.311 "is_configured": true, 00:19:03.311 "data_offset": 256, 00:19:03.311 "data_size": 7936 00:19:03.311 } 00:19:03.311 ] 00:19:03.311 }' 00:19:03.311 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.311 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.877 [2024-12-06 16:34:45.473279] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.877 [2024-12-06 16:34:45.473358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.877 [2024-12-06 16:34:45.473468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.877 [2024-12-06 16:34:45.473544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.877 [2024-12-06 16:34:45.473582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.877 16:34:45 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.877 16:34:45 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.877 [2024-12-06 16:34:45.529165] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:03.877 [2024-12-06 16:34:45.529305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.877 [2024-12-06 16:34:45.529360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:03.877 [2024-12-06 16:34:45.529393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.877 [2024-12-06 16:34:45.531582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.877 [2024-12-06 16:34:45.531619] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:03.877 [2024-12-06 16:34:45.531676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:03.877 [2024-12-06 16:34:45.531706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:03.877 [2024-12-06 16:34:45.531774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:19:03.877 [2024-12-06 16:34:45.531782] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:03.877 [2024-12-06 16:34:45.531865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:03.877 [2024-12-06 16:34:45.531950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:19:03.877 [2024-12-06 16:34:45.531960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:19:03.877 [2024-12-06 16:34:45.532028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.877 pt2 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.877 16:34:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.877 "name": "raid_bdev1", 00:19:03.877 "uuid": 
"bc82c501-6597-4f20-996a-1b128622d92a", 00:19:03.877 "strip_size_kb": 0, 00:19:03.877 "state": "online", 00:19:03.877 "raid_level": "raid1", 00:19:03.877 "superblock": true, 00:19:03.877 "num_base_bdevs": 2, 00:19:03.877 "num_base_bdevs_discovered": 1, 00:19:03.877 "num_base_bdevs_operational": 1, 00:19:03.877 "base_bdevs_list": [ 00:19:03.877 { 00:19:03.877 "name": null, 00:19:03.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.877 "is_configured": false, 00:19:03.877 "data_offset": 256, 00:19:03.877 "data_size": 7936 00:19:03.877 }, 00:19:03.877 { 00:19:03.877 "name": "pt2", 00:19:03.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.877 "is_configured": true, 00:19:03.877 "data_offset": 256, 00:19:03.877 "data_size": 7936 00:19:03.877 } 00:19:03.877 ] 00:19:03.877 }' 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.877 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.443 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:04.443 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.443 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.443 [2024-12-06 16:34:45.988375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.443 [2024-12-06 16:34:45.988454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.443 [2024-12-06 16:34:45.988564] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.443 [2024-12-06 16:34:45.988647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.443 [2024-12-06 16:34:45.988737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:19:04.443 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.443 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:04.443 16:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.443 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.444 16:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.444 [2024-12-06 16:34:46.036325] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:04.444 [2024-12-06 16:34:46.036436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.444 [2024-12-06 16:34:46.036485] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:04.444 [2024-12-06 16:34:46.036531] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.444 [2024-12-06 
16:34:46.038724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.444 [2024-12-06 16:34:46.038803] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:04.444 [2024-12-06 16:34:46.038888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:04.444 [2024-12-06 16:34:46.038961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:04.444 [2024-12-06 16:34:46.039113] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:04.444 [2024-12-06 16:34:46.039176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.444 [2024-12-06 16:34:46.039231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:19:04.444 [2024-12-06 16:34:46.039322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.444 [2024-12-06 16:34:46.039434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:19:04.444 [2024-12-06 16:34:46.039476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:04.444 [2024-12-06 16:34:46.039568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:04.444 [2024-12-06 16:34:46.039698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:19:04.444 [2024-12-06 16:34:46.039741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:19:04.444 [2024-12-06 16:34:46.039882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.444 pt1 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.444 16:34:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.444 "name": "raid_bdev1", 00:19:04.444 "uuid": "bc82c501-6597-4f20-996a-1b128622d92a", 00:19:04.444 "strip_size_kb": 0, 00:19:04.444 "state": "online", 00:19:04.444 "raid_level": "raid1", 00:19:04.444 "superblock": true, 00:19:04.444 "num_base_bdevs": 2, 00:19:04.444 "num_base_bdevs_discovered": 1, 00:19:04.444 "num_base_bdevs_operational": 1, 00:19:04.444 "base_bdevs_list": [ 00:19:04.444 { 00:19:04.444 "name": null, 00:19:04.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.444 "is_configured": false, 00:19:04.444 "data_offset": 256, 00:19:04.444 "data_size": 7936 00:19:04.444 }, 00:19:04.444 { 00:19:04.444 "name": "pt2", 00:19:04.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.444 "is_configured": true, 00:19:04.444 "data_offset": 256, 00:19:04.444 "data_size": 7936 00:19:04.444 } 00:19:04.444 ] 00:19:04.444 }' 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.444 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:04.703 [2024-12-06 16:34:46.463969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' bc82c501-6597-4f20-996a-1b128622d92a '!=' bc82c501-6597-4f20-996a-1b128622d92a ']' 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 98263 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 98263 ']' 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 98263 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98263 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.703 killing process with pid 98263 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98263' 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 98263 00:19:04.703 [2024-12-06 16:34:46.537805] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:04.703 [2024-12-06 16:34:46.537898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.703 [2024-12-06 16:34:46.537953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.703 [2024-12-06 16:34:46.537965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:19:04.703 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 98263 00:19:04.962 [2024-12-06 16:34:46.563813] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.962 ************************************ 00:19:04.962 END TEST raid_superblock_test_md_separate 00:19:04.962 ************************************ 00:19:04.962 16:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:04.962 00:19:04.962 real 0m4.797s 00:19:04.962 user 0m7.751s 00:19:04.962 sys 0m1.058s 00:19:04.962 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.962 16:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.221 16:34:46 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:05.221 16:34:46 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:05.221 16:34:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:05.221 16:34:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.221 16:34:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.221 ************************************ 00:19:05.221 START TEST raid_rebuild_test_sb_md_separate 00:19:05.221 
************************************ 00:19:05.221 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:05.221 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:05.221 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:05.221 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:05.221 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:05.221 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:05.221 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:05.221 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:05.221 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:05.221 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:05.221 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98575 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98575 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 98575 ']' 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.222 16:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.222 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:05.222 Zero copy mechanism will not be used. 00:19:05.222 [2024-12-06 16:34:46.951055] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:19:05.222 [2024-12-06 16:34:46.951184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98575 ] 00:19:05.481 [2024-12-06 16:34:47.124010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.481 [2024-12-06 16:34:47.151823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.481 [2024-12-06 16:34:47.195821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.481 [2024-12-06 16:34:47.195858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.050 BaseBdev1_malloc 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.050 [2024-12-06 16:34:47.853721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:06.050 [2024-12-06 16:34:47.853838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.050 [2024-12-06 16:34:47.853886] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:06.050 [2024-12-06 16:34:47.853946] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.050 [2024-12-06 16:34:47.855991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.050 [2024-12-06 16:34:47.856072] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:06.050 BaseBdev1 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:06.050 16:34:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.050 BaseBdev2_malloc 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.050 [2024-12-06 16:34:47.875623] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:06.050 [2024-12-06 16:34:47.875737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.050 [2024-12-06 16:34:47.875796] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:06.050 [2024-12-06 16:34:47.875835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.050 [2024-12-06 16:34:47.878055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.050 [2024-12-06 16:34:47.878134] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:06.050 BaseBdev2 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.050 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:19:06.310 spare_malloc 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.310 spare_delay 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.310 [2024-12-06 16:34:47.921272] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:06.310 [2024-12-06 16:34:47.921366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.310 [2024-12-06 16:34:47.921411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:06.310 [2024-12-06 16:34:47.921444] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.310 [2024-12-06 16:34:47.923613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.310 [2024-12-06 16:34:47.923687] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:06.310 spare 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.310 [2024-12-06 16:34:47.933287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:06.310 [2024-12-06 16:34:47.935419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:06.310 [2024-12-06 16:34:47.935592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:19:06.310 [2024-12-06 16:34:47.935607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:06.310 [2024-12-06 16:34:47.935708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:06.310 [2024-12-06 16:34:47.935830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:19:06.310 [2024-12-06 16:34:47.935843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:19:06.310 [2024-12-06 16:34:47.935933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.310 "name": "raid_bdev1", 00:19:06.310 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:06.310 "strip_size_kb": 0, 00:19:06.310 "state": "online", 00:19:06.310 "raid_level": "raid1", 00:19:06.310 "superblock": true, 00:19:06.310 "num_base_bdevs": 2, 00:19:06.310 "num_base_bdevs_discovered": 2, 00:19:06.310 "num_base_bdevs_operational": 2, 00:19:06.310 "base_bdevs_list": [ 00:19:06.310 { 00:19:06.310 "name": "BaseBdev1", 00:19:06.310 "uuid": "c5ada2ba-1dd3-5abc-ab76-f949e9e7e2e3", 00:19:06.310 "is_configured": true, 00:19:06.310 "data_offset": 256, 
00:19:06.310 "data_size": 7936 00:19:06.310 }, 00:19:06.310 { 00:19:06.310 "name": "BaseBdev2", 00:19:06.310 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:06.310 "is_configured": true, 00:19:06.310 "data_offset": 256, 00:19:06.310 "data_size": 7936 00:19:06.310 } 00:19:06.310 ] 00:19:06.310 }' 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.310 16:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.570 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.570 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:06.570 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.570 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.570 [2024-12-06 16:34:48.336908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.570 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.570 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:06.570 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.570 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.570 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.570 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:06.570 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.829 16:34:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:06.829 [2024-12-06 16:34:48.612216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:06.829 /dev/nbd0 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:06.829 16:34:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.829 1+0 records in 00:19:06.829 1+0 records out 00:19:06.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224016 s, 18.3 MB/s 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:06.829 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.088 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:07.088 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@893 -- # return 0 00:19:07.088 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:07.088 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:07.088 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:07.088 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:07.088 16:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:07.672 7936+0 records in 00:19:07.672 7936+0 records out 00:19:07.672 32505856 bytes (33 MB, 31 MiB) copied, 0.590078 s, 55.1 MB/s 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:07.672 [2024-12-06 16:34:49.475582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.672 [2024-12-06 16:34:49.495644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.672 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.932 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.932 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.932 "name": "raid_bdev1", 00:19:07.932 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:07.932 "strip_size_kb": 0, 00:19:07.932 "state": "online", 00:19:07.932 "raid_level": "raid1", 00:19:07.932 "superblock": true, 00:19:07.932 "num_base_bdevs": 2, 00:19:07.932 "num_base_bdevs_discovered": 1, 00:19:07.932 "num_base_bdevs_operational": 1, 00:19:07.932 "base_bdevs_list": [ 00:19:07.932 { 00:19:07.932 "name": null, 00:19:07.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.932 "is_configured": false, 00:19:07.932 "data_offset": 0, 00:19:07.932 "data_size": 7936 00:19:07.932 }, 00:19:07.932 { 00:19:07.932 "name": "BaseBdev2", 00:19:07.932 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:07.932 "is_configured": 
true, 00:19:07.932 "data_offset": 256, 00:19:07.932 "data_size": 7936 00:19:07.932 } 00:19:07.932 ] 00:19:07.932 }' 00:19:07.932 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.932 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.193 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:08.193 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.193 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.193 [2024-12-06 16:34:49.974881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.193 [2024-12-06 16:34:49.977654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:19:08.193 [2024-12-06 16:34:49.979623] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:08.193 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.193 16:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:09.575 16:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.575 16:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.575 16:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.575 16:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.575 16:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.575 16:34:50 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.575 16:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.575 16:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.575 16:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.575 "name": "raid_bdev1", 00:19:09.575 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:09.575 "strip_size_kb": 0, 00:19:09.575 "state": "online", 00:19:09.575 "raid_level": "raid1", 00:19:09.575 "superblock": true, 00:19:09.575 "num_base_bdevs": 2, 00:19:09.575 "num_base_bdevs_discovered": 2, 00:19:09.575 "num_base_bdevs_operational": 2, 00:19:09.575 "process": { 00:19:09.575 "type": "rebuild", 00:19:09.575 "target": "spare", 00:19:09.575 "progress": { 00:19:09.575 "blocks": 2560, 00:19:09.575 "percent": 32 00:19:09.575 } 00:19:09.575 }, 00:19:09.575 "base_bdevs_list": [ 00:19:09.575 { 00:19:09.575 "name": "spare", 00:19:09.575 "uuid": "1f05d5f8-b492-5ef6-bfc2-ca878cbc5a45", 00:19:09.575 "is_configured": true, 00:19:09.575 "data_offset": 256, 00:19:09.575 "data_size": 7936 00:19:09.575 }, 00:19:09.575 { 00:19:09.575 "name": "BaseBdev2", 00:19:09.575 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:09.575 "is_configured": true, 00:19:09.575 "data_offset": 256, 00:19:09.575 "data_size": 7936 00:19:09.575 } 00:19:09.575 ] 00:19:09.575 }' 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.575 
16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.575 [2024-12-06 16:34:51.146454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.575 [2024-12-06 16:34:51.185416] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:09.575 [2024-12-06 16:34:51.185484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.575 [2024-12-06 16:34:51.185504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.575 [2024-12-06 16:34:51.185512] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.575 16:34:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.575 "name": "raid_bdev1", 00:19:09.575 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:09.575 "strip_size_kb": 0, 00:19:09.575 "state": "online", 00:19:09.575 "raid_level": "raid1", 00:19:09.575 "superblock": true, 00:19:09.575 "num_base_bdevs": 2, 00:19:09.575 "num_base_bdevs_discovered": 1, 00:19:09.575 "num_base_bdevs_operational": 1, 00:19:09.575 "base_bdevs_list": [ 00:19:09.575 { 00:19:09.575 "name": null, 00:19:09.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.575 "is_configured": false, 00:19:09.575 "data_offset": 0, 00:19:09.575 "data_size": 7936 00:19:09.575 }, 00:19:09.575 { 00:19:09.575 "name": "BaseBdev2", 00:19:09.575 "uuid": 
"8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:09.575 "is_configured": true, 00:19:09.575 "data_offset": 256, 00:19:09.575 "data_size": 7936 00:19:09.575 } 00:19:09.575 ] 00:19:09.575 }' 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.575 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.835 "name": "raid_bdev1", 00:19:09.835 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:09.835 "strip_size_kb": 0, 00:19:09.835 "state": "online", 00:19:09.835 "raid_level": "raid1", 00:19:09.835 "superblock": true, 00:19:09.835 
"num_base_bdevs": 2, 00:19:09.835 "num_base_bdevs_discovered": 1, 00:19:09.835 "num_base_bdevs_operational": 1, 00:19:09.835 "base_bdevs_list": [ 00:19:09.835 { 00:19:09.835 "name": null, 00:19:09.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.835 "is_configured": false, 00:19:09.835 "data_offset": 0, 00:19:09.835 "data_size": 7936 00:19:09.835 }, 00:19:09.835 { 00:19:09.835 "name": "BaseBdev2", 00:19:09.835 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:09.835 "is_configured": true, 00:19:09.835 "data_offset": 256, 00:19:09.835 "data_size": 7936 00:19:09.835 } 00:19:09.835 ] 00:19:09.835 }' 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:09.835 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.095 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:10.095 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:10.095 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.095 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:10.095 [2024-12-06 16:34:51.720108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.095 [2024-12-06 16:34:51.722794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:19:10.095 [2024-12-06 16:34:51.724927] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:10.095 16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.095 
16:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.042 "name": "raid_bdev1", 00:19:11.042 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:11.042 "strip_size_kb": 0, 00:19:11.042 "state": "online", 00:19:11.042 "raid_level": "raid1", 00:19:11.042 "superblock": true, 00:19:11.042 "num_base_bdevs": 2, 00:19:11.042 "num_base_bdevs_discovered": 2, 00:19:11.042 "num_base_bdevs_operational": 2, 00:19:11.042 "process": { 00:19:11.042 "type": "rebuild", 00:19:11.042 "target": "spare", 00:19:11.042 "progress": { 00:19:11.042 "blocks": 2560, 00:19:11.042 "percent": 32 00:19:11.042 } 00:19:11.042 }, 
00:19:11.042 "base_bdevs_list": [ 00:19:11.042 { 00:19:11.042 "name": "spare", 00:19:11.042 "uuid": "1f05d5f8-b492-5ef6-bfc2-ca878cbc5a45", 00:19:11.042 "is_configured": true, 00:19:11.042 "data_offset": 256, 00:19:11.042 "data_size": 7936 00:19:11.042 }, 00:19:11.042 { 00:19:11.042 "name": "BaseBdev2", 00:19:11.042 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:11.042 "is_configured": true, 00:19:11.042 "data_offset": 256, 00:19:11.042 "data_size": 7936 00:19:11.042 } 00:19:11.042 ] 00:19:11.042 }' 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:11.042 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=602 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.042 16:34:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.042 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.302 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.302 "name": "raid_bdev1", 00:19:11.302 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:11.302 "strip_size_kb": 0, 00:19:11.302 "state": "online", 00:19:11.302 "raid_level": "raid1", 00:19:11.302 "superblock": true, 00:19:11.302 "num_base_bdevs": 2, 00:19:11.302 "num_base_bdevs_discovered": 2, 00:19:11.302 "num_base_bdevs_operational": 2, 00:19:11.302 "process": { 00:19:11.302 "type": "rebuild", 00:19:11.302 "target": "spare", 00:19:11.302 "progress": { 00:19:11.302 "blocks": 2816, 00:19:11.302 "percent": 35 00:19:11.302 } 00:19:11.302 }, 00:19:11.302 "base_bdevs_list": [ 00:19:11.302 { 00:19:11.302 "name": "spare", 00:19:11.303 "uuid": 
"1f05d5f8-b492-5ef6-bfc2-ca878cbc5a45", 00:19:11.303 "is_configured": true, 00:19:11.303 "data_offset": 256, 00:19:11.303 "data_size": 7936 00:19:11.303 }, 00:19:11.303 { 00:19:11.303 "name": "BaseBdev2", 00:19:11.303 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:11.303 "is_configured": true, 00:19:11.303 "data_offset": 256, 00:19:11.303 "data_size": 7936 00:19:11.303 } 00:19:11.303 ] 00:19:11.303 }' 00:19:11.303 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.303 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.303 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.303 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.303 16:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:12.241 16:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.241 16:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.241 16:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.241 16:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.241 16:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.241 16:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.241 16:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.241 16:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:12.241 16:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.241 16:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.241 16:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.241 16:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.241 "name": "raid_bdev1", 00:19:12.241 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:12.241 "strip_size_kb": 0, 00:19:12.241 "state": "online", 00:19:12.241 "raid_level": "raid1", 00:19:12.241 "superblock": true, 00:19:12.241 "num_base_bdevs": 2, 00:19:12.241 "num_base_bdevs_discovered": 2, 00:19:12.241 "num_base_bdevs_operational": 2, 00:19:12.241 "process": { 00:19:12.241 "type": "rebuild", 00:19:12.241 "target": "spare", 00:19:12.241 "progress": { 00:19:12.241 "blocks": 5632, 00:19:12.241 "percent": 70 00:19:12.241 } 00:19:12.241 }, 00:19:12.241 "base_bdevs_list": [ 00:19:12.241 { 00:19:12.241 "name": "spare", 00:19:12.241 "uuid": "1f05d5f8-b492-5ef6-bfc2-ca878cbc5a45", 00:19:12.241 "is_configured": true, 00:19:12.241 "data_offset": 256, 00:19:12.241 "data_size": 7936 00:19:12.241 }, 00:19:12.241 { 00:19:12.241 "name": "BaseBdev2", 00:19:12.241 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:12.241 "is_configured": true, 00:19:12.241 "data_offset": 256, 00:19:12.241 "data_size": 7936 00:19:12.241 } 00:19:12.241 ] 00:19:12.241 }' 00:19:12.241 16:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.501 16:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.501 16:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.501 16:34:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.501 16:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:13.070 [2024-12-06 16:34:54.838161] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:13.070 [2024-12-06 16:34:54.838365] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:13.070 [2024-12-06 16:34:54.838482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.329 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.329 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.329 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.329 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.329 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.329 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.329 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.329 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.329 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.329 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.587 16:34:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.587 "name": "raid_bdev1", 00:19:13.587 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:13.587 "strip_size_kb": 0, 00:19:13.587 "state": "online", 00:19:13.587 "raid_level": "raid1", 00:19:13.587 "superblock": true, 00:19:13.587 "num_base_bdevs": 2, 00:19:13.587 "num_base_bdevs_discovered": 2, 00:19:13.587 "num_base_bdevs_operational": 2, 00:19:13.587 "base_bdevs_list": [ 00:19:13.587 { 00:19:13.587 "name": "spare", 00:19:13.587 "uuid": "1f05d5f8-b492-5ef6-bfc2-ca878cbc5a45", 00:19:13.587 "is_configured": true, 00:19:13.587 "data_offset": 256, 00:19:13.587 "data_size": 7936 00:19:13.587 }, 00:19:13.587 { 00:19:13.587 "name": "BaseBdev2", 00:19:13.587 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:13.587 "is_configured": true, 00:19:13.587 "data_offset": 256, 00:19:13.587 "data_size": 7936 00:19:13.587 } 00:19:13.587 ] 00:19:13.587 }' 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:13.587 16:34:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.587 "name": "raid_bdev1", 00:19:13.587 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:13.587 "strip_size_kb": 0, 00:19:13.587 "state": "online", 00:19:13.587 "raid_level": "raid1", 00:19:13.587 "superblock": true, 00:19:13.587 "num_base_bdevs": 2, 00:19:13.587 "num_base_bdevs_discovered": 2, 00:19:13.587 "num_base_bdevs_operational": 2, 00:19:13.587 "base_bdevs_list": [ 00:19:13.587 { 00:19:13.587 "name": "spare", 00:19:13.587 "uuid": "1f05d5f8-b492-5ef6-bfc2-ca878cbc5a45", 00:19:13.587 "is_configured": true, 00:19:13.587 "data_offset": 256, 00:19:13.587 "data_size": 7936 00:19:13.587 }, 00:19:13.587 { 00:19:13.587 "name": "BaseBdev2", 00:19:13.587 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:13.587 "is_configured": true, 00:19:13.587 "data_offset": 256, 00:19:13.587 "data_size": 7936 00:19:13.587 } 00:19:13.587 ] 00:19:13.587 }' 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.587 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.846 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.846 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.846 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.846 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:19:13.846 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.846 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.847 "name": "raid_bdev1", 00:19:13.847 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:13.847 "strip_size_kb": 0, 00:19:13.847 "state": "online", 00:19:13.847 "raid_level": "raid1", 00:19:13.847 "superblock": true, 00:19:13.847 "num_base_bdevs": 2, 00:19:13.847 "num_base_bdevs_discovered": 2, 00:19:13.847 "num_base_bdevs_operational": 2, 00:19:13.847 "base_bdevs_list": [ 00:19:13.847 { 00:19:13.847 "name": "spare", 00:19:13.847 "uuid": "1f05d5f8-b492-5ef6-bfc2-ca878cbc5a45", 00:19:13.847 "is_configured": true, 00:19:13.847 "data_offset": 256, 00:19:13.847 "data_size": 7936 00:19:13.847 }, 00:19:13.847 { 00:19:13.847 "name": "BaseBdev2", 00:19:13.847 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:13.847 "is_configured": true, 00:19:13.847 "data_offset": 256, 00:19:13.847 "data_size": 7936 00:19:13.847 } 00:19:13.847 ] 00:19:13.847 }' 00:19:13.847 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.847 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.107 [2024-12-06 16:34:55.856660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:14.107 [2024-12-06 16:34:55.856744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:14.107 [2024-12-06 16:34:55.856883] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.107 [2024-12-06 16:34:55.856990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:14.107 [2024-12-06 16:34:55.857045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:14.107 16:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:14.365 /dev/nbd0 00:19:14.365 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:14.366 1+0 records in 00:19:14.366 1+0 records out 00:19:14.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052992 s, 7.7 MB/s 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:14.366 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:14.625 /dev/nbd1 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:14.625 1+0 records in 00:19:14.625 1+0 records out 00:19:14.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462348 s, 8.9 MB/s 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:14.625 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:14.884 16:34:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.884 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:15.145 16:34:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.145 [2024-12-06 16:34:56.971778] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:15.145 [2024-12-06 16:34:56.971893] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.145 [2024-12-06 16:34:56.971944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:15.145 [2024-12-06 16:34:56.971986] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.145 [2024-12-06 16:34:56.974078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.145 [2024-12-06 16:34:56.974166] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:15.145 [2024-12-06 16:34:56.974262] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:15.145 [2024-12-06 16:34:56.974341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:15.145 [2024-12-06 16:34:56.974485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.145 spare 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.145 16:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.405 [2024-12-06 16:34:57.074434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:19:15.405 [2024-12-06 16:34:57.074516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:15.405 [2024-12-06 16:34:57.074641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:19:15.405 [2024-12-06 16:34:57.074774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:19:15.405 [2024-12-06 16:34:57.074793] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:19:15.405 [2024-12-06 16:34:57.074915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.405 16:34:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.405 "name": "raid_bdev1", 00:19:15.405 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:15.405 "strip_size_kb": 0, 00:19:15.405 "state": "online", 00:19:15.405 "raid_level": "raid1", 00:19:15.405 "superblock": true, 00:19:15.405 "num_base_bdevs": 2, 00:19:15.405 "num_base_bdevs_discovered": 2, 00:19:15.405 "num_base_bdevs_operational": 2, 00:19:15.405 "base_bdevs_list": [ 00:19:15.405 { 00:19:15.405 "name": "spare", 00:19:15.405 "uuid": "1f05d5f8-b492-5ef6-bfc2-ca878cbc5a45", 00:19:15.405 "is_configured": true, 00:19:15.405 "data_offset": 256, 00:19:15.405 "data_size": 7936 00:19:15.405 }, 00:19:15.405 { 00:19:15.405 "name": "BaseBdev2", 00:19:15.405 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:15.405 "is_configured": true, 00:19:15.405 "data_offset": 256, 00:19:15.405 "data_size": 7936 00:19:15.405 } 00:19:15.405 ] 00:19:15.405 }' 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.405 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.974 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.975 "name": "raid_bdev1", 00:19:15.975 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:15.975 "strip_size_kb": 0, 00:19:15.975 "state": "online", 00:19:15.975 "raid_level": "raid1", 00:19:15.975 "superblock": true, 00:19:15.975 "num_base_bdevs": 2, 00:19:15.975 "num_base_bdevs_discovered": 2, 00:19:15.975 "num_base_bdevs_operational": 2, 00:19:15.975 "base_bdevs_list": [ 00:19:15.975 { 00:19:15.975 "name": "spare", 00:19:15.975 "uuid": "1f05d5f8-b492-5ef6-bfc2-ca878cbc5a45", 00:19:15.975 "is_configured": true, 00:19:15.975 "data_offset": 256, 00:19:15.975 "data_size": 7936 00:19:15.975 }, 00:19:15.975 { 00:19:15.975 "name": "BaseBdev2", 00:19:15.975 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:15.975 "is_configured": true, 00:19:15.975 "data_offset": 256, 00:19:15.975 "data_size": 7936 00:19:15.975 } 00:19:15.975 ] 00:19:15.975 }' 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.975 
16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.975 [2024-12-06 16:34:57.682583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.975 "name": "raid_bdev1", 00:19:15.975 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:15.975 "strip_size_kb": 0, 00:19:15.975 "state": "online", 00:19:15.975 "raid_level": "raid1", 00:19:15.975 "superblock": true, 00:19:15.975 "num_base_bdevs": 2, 00:19:15.975 "num_base_bdevs_discovered": 1, 00:19:15.975 "num_base_bdevs_operational": 1, 00:19:15.975 "base_bdevs_list": [ 00:19:15.975 { 00:19:15.975 "name": null, 00:19:15.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.975 "is_configured": false, 00:19:15.975 "data_offset": 0, 00:19:15.975 "data_size": 7936 00:19:15.975 }, 00:19:15.975 { 00:19:15.975 
"name": "BaseBdev2", 00:19:15.975 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:15.975 "is_configured": true, 00:19:15.975 "data_offset": 256, 00:19:15.975 "data_size": 7936 00:19:15.975 } 00:19:15.975 ] 00:19:15.975 }' 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.975 16:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:16.545 16:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:16.545 16:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.545 16:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:16.545 [2024-12-06 16:34:58.149824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.545 [2024-12-06 16:34:58.150062] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:16.545 [2024-12-06 16:34:58.150120] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:16.545 [2024-12-06 16:34:58.150180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.545 [2024-12-06 16:34:58.152570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:19:16.545 [2024-12-06 16:34:58.154482] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:16.545 16:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.545 16:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.485 "name": "raid_bdev1", 00:19:17.485 
"uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:17.485 "strip_size_kb": 0, 00:19:17.485 "state": "online", 00:19:17.485 "raid_level": "raid1", 00:19:17.485 "superblock": true, 00:19:17.485 "num_base_bdevs": 2, 00:19:17.485 "num_base_bdevs_discovered": 2, 00:19:17.485 "num_base_bdevs_operational": 2, 00:19:17.485 "process": { 00:19:17.485 "type": "rebuild", 00:19:17.485 "target": "spare", 00:19:17.485 "progress": { 00:19:17.485 "blocks": 2560, 00:19:17.485 "percent": 32 00:19:17.485 } 00:19:17.485 }, 00:19:17.485 "base_bdevs_list": [ 00:19:17.485 { 00:19:17.485 "name": "spare", 00:19:17.485 "uuid": "1f05d5f8-b492-5ef6-bfc2-ca878cbc5a45", 00:19:17.485 "is_configured": true, 00:19:17.485 "data_offset": 256, 00:19:17.485 "data_size": 7936 00:19:17.485 }, 00:19:17.485 { 00:19:17.485 "name": "BaseBdev2", 00:19:17.485 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:17.485 "is_configured": true, 00:19:17.485 "data_offset": 256, 00:19:17.485 "data_size": 7936 00:19:17.485 } 00:19:17.485 ] 00:19:17.485 }' 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.485 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.485 [2024-12-06 16:34:59.317280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.745 
[2024-12-06 16:34:59.358936] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:17.745 [2024-12-06 16:34:59.359054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.745 [2024-12-06 16:34:59.359093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.745 [2024-12-06 16:34:59.359130] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.745 16:34:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.745 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.745 "name": "raid_bdev1", 00:19:17.746 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:17.746 "strip_size_kb": 0, 00:19:17.746 "state": "online", 00:19:17.746 "raid_level": "raid1", 00:19:17.746 "superblock": true, 00:19:17.746 "num_base_bdevs": 2, 00:19:17.746 "num_base_bdevs_discovered": 1, 00:19:17.746 "num_base_bdevs_operational": 1, 00:19:17.746 "base_bdevs_list": [ 00:19:17.746 { 00:19:17.746 "name": null, 00:19:17.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.746 "is_configured": false, 00:19:17.746 "data_offset": 0, 00:19:17.746 "data_size": 7936 00:19:17.746 }, 00:19:17.746 { 00:19:17.746 "name": "BaseBdev2", 00:19:17.746 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:17.746 "is_configured": true, 00:19:17.746 "data_offset": 256, 00:19:17.746 "data_size": 7936 00:19:17.746 } 00:19:17.746 ] 00:19:17.746 }' 00:19:17.746 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.746 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:18.006 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:18.006 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.006 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.006 [2024-12-06 16:34:59.789446] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:18.006 [2024-12-06 16:34:59.789571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.006 [2024-12-06 16:34:59.789615] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:18.006 [2024-12-06 16:34:59.789646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.006 [2024-12-06 16:34:59.789888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.006 [2024-12-06 16:34:59.789938] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:18.006 [2024-12-06 16:34:59.790021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:18.006 [2024-12-06 16:34:59.790056] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:18.006 [2024-12-06 16:34:59.790119] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:18.006 [2024-12-06 16:34:59.790192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.006 [2024-12-06 16:34:59.792591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:18.006 [2024-12-06 16:34:59.794479] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:18.006 spare 00:19:18.006 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.006 16:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.386 "name": 
"raid_bdev1", 00:19:19.386 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:19.386 "strip_size_kb": 0, 00:19:19.386 "state": "online", 00:19:19.386 "raid_level": "raid1", 00:19:19.386 "superblock": true, 00:19:19.386 "num_base_bdevs": 2, 00:19:19.386 "num_base_bdevs_discovered": 2, 00:19:19.386 "num_base_bdevs_operational": 2, 00:19:19.386 "process": { 00:19:19.386 "type": "rebuild", 00:19:19.386 "target": "spare", 00:19:19.386 "progress": { 00:19:19.386 "blocks": 2560, 00:19:19.386 "percent": 32 00:19:19.386 } 00:19:19.386 }, 00:19:19.386 "base_bdevs_list": [ 00:19:19.386 { 00:19:19.386 "name": "spare", 00:19:19.386 "uuid": "1f05d5f8-b492-5ef6-bfc2-ca878cbc5a45", 00:19:19.386 "is_configured": true, 00:19:19.386 "data_offset": 256, 00:19:19.386 "data_size": 7936 00:19:19.386 }, 00:19:19.386 { 00:19:19.386 "name": "BaseBdev2", 00:19:19.386 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:19.386 "is_configured": true, 00:19:19.386 "data_offset": 256, 00:19:19.386 "data_size": 7936 00:19:19.386 } 00:19:19.386 ] 00:19:19.386 }' 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.386 16:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.386 [2024-12-06 16:35:00.953381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:19.386 [2024-12-06 16:35:00.998699] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:19.386 [2024-12-06 16:35:00.998822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.386 [2024-12-06 16:35:00.998858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.386 [2024-12-06 16:35:00.998881] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.386 "name": "raid_bdev1", 00:19:19.386 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:19.386 "strip_size_kb": 0, 00:19:19.386 "state": "online", 00:19:19.386 "raid_level": "raid1", 00:19:19.386 "superblock": true, 00:19:19.386 "num_base_bdevs": 2, 00:19:19.386 "num_base_bdevs_discovered": 1, 00:19:19.386 "num_base_bdevs_operational": 1, 00:19:19.386 "base_bdevs_list": [ 00:19:19.386 { 00:19:19.386 "name": null, 00:19:19.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.386 "is_configured": false, 00:19:19.386 "data_offset": 0, 00:19:19.386 "data_size": 7936 00:19:19.386 }, 00:19:19.386 { 00:19:19.386 "name": "BaseBdev2", 00:19:19.386 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:19.386 "is_configured": true, 00:19:19.386 "data_offset": 256, 00:19:19.386 "data_size": 7936 00:19:19.386 } 00:19:19.386 ] 00:19:19.386 }' 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.386 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.645 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:19.645 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.645 16:35:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:19.645 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:19.645 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.645 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.645 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.645 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.645 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.645 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.904 "name": "raid_bdev1", 00:19:19.904 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:19.904 "strip_size_kb": 0, 00:19:19.904 "state": "online", 00:19:19.904 "raid_level": "raid1", 00:19:19.904 "superblock": true, 00:19:19.904 "num_base_bdevs": 2, 00:19:19.904 "num_base_bdevs_discovered": 1, 00:19:19.904 "num_base_bdevs_operational": 1, 00:19:19.904 "base_bdevs_list": [ 00:19:19.904 { 00:19:19.904 "name": null, 00:19:19.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.904 "is_configured": false, 00:19:19.904 "data_offset": 0, 00:19:19.904 "data_size": 7936 00:19:19.904 }, 00:19:19.904 { 00:19:19.904 "name": "BaseBdev2", 00:19:19.904 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:19.904 "is_configured": true, 00:19:19.904 "data_offset": 256, 00:19:19.904 "data_size": 7936 00:19:19.904 } 00:19:19.904 ] 00:19:19.904 }' 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.904 [2024-12-06 16:35:01.589052] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:19.904 [2024-12-06 16:35:01.589147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.904 [2024-12-06 16:35:01.589187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:19.904 [2024-12-06 16:35:01.589247] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.904 [2024-12-06 16:35:01.589506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.904 [2024-12-06 16:35:01.589555] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:19:19.904 [2024-12-06 16:35:01.589629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:19.904 [2024-12-06 16:35:01.589672] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:19.904 [2024-12-06 16:35:01.589722] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:19.904 [2024-12-06 16:35:01.589777] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:19.904 BaseBdev1 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.904 16:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.844 "name": "raid_bdev1", 00:19:20.844 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:20.844 "strip_size_kb": 0, 00:19:20.844 "state": "online", 00:19:20.844 "raid_level": "raid1", 00:19:20.844 "superblock": true, 00:19:20.844 "num_base_bdevs": 2, 00:19:20.844 "num_base_bdevs_discovered": 1, 00:19:20.844 "num_base_bdevs_operational": 1, 00:19:20.844 "base_bdevs_list": [ 00:19:20.844 { 00:19:20.844 "name": null, 00:19:20.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.844 "is_configured": false, 00:19:20.844 "data_offset": 0, 00:19:20.844 "data_size": 7936 00:19:20.844 }, 00:19:20.844 { 00:19:20.844 "name": "BaseBdev2", 00:19:20.844 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:20.844 "is_configured": true, 00:19:20.844 "data_offset": 256, 00:19:20.844 "data_size": 7936 00:19:20.844 } 00:19:20.844 ] 00:19:20.844 }' 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.844 16:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.411 "name": "raid_bdev1", 00:19:21.411 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:21.411 "strip_size_kb": 0, 00:19:21.411 "state": "online", 00:19:21.411 "raid_level": "raid1", 00:19:21.411 "superblock": true, 00:19:21.411 "num_base_bdevs": 2, 00:19:21.411 "num_base_bdevs_discovered": 1, 00:19:21.411 "num_base_bdevs_operational": 1, 00:19:21.411 "base_bdevs_list": [ 00:19:21.411 { 00:19:21.411 "name": null, 00:19:21.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.411 "is_configured": false, 00:19:21.411 "data_offset": 0, 00:19:21.411 "data_size": 7936 00:19:21.411 }, 00:19:21.411 { 00:19:21.411 "name": "BaseBdev2", 00:19:21.411 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:21.411 "is_configured": 
true, 00:19:21.411 "data_offset": 256, 00:19:21.411 "data_size": 7936 00:19:21.411 } 00:19:21.411 ] 00:19:21.411 }' 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.411 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.411 [2024-12-06 16:35:03.218373] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:21.411 [2024-12-06 16:35:03.218602] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:21.411 [2024-12-06 16:35:03.218658] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:21.411 request: 00:19:21.411 { 00:19:21.411 "base_bdev": "BaseBdev1", 00:19:21.411 "raid_bdev": "raid_bdev1", 00:19:21.411 "method": "bdev_raid_add_base_bdev", 00:19:21.411 "req_id": 1 00:19:21.411 } 00:19:21.411 Got JSON-RPC error response 00:19:21.412 response: 00:19:21.412 { 00:19:21.412 "code": -22, 00:19:21.412 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:21.412 } 00:19:21.412 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:21.412 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:21.412 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:21.412 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:21.412 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:21.412 16:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.788 "name": "raid_bdev1", 00:19:22.788 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:22.788 "strip_size_kb": 0, 00:19:22.788 "state": "online", 00:19:22.788 "raid_level": "raid1", 00:19:22.788 "superblock": true, 00:19:22.788 "num_base_bdevs": 2, 00:19:22.788 "num_base_bdevs_discovered": 1, 00:19:22.788 "num_base_bdevs_operational": 1, 00:19:22.788 "base_bdevs_list": [ 00:19:22.788 { 00:19:22.788 "name": null, 00:19:22.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.788 "is_configured": false, 00:19:22.788 
"data_offset": 0, 00:19:22.788 "data_size": 7936 00:19:22.788 }, 00:19:22.788 { 00:19:22.788 "name": "BaseBdev2", 00:19:22.788 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:22.788 "is_configured": true, 00:19:22.788 "data_offset": 256, 00:19:22.788 "data_size": 7936 00:19:22.788 } 00:19:22.788 ] 00:19:22.788 }' 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.788 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.050 "name": "raid_bdev1", 00:19:23.050 "uuid": "7df28ea4-6867-425a-b359-b82034a97ff7", 00:19:23.050 
"strip_size_kb": 0, 00:19:23.050 "state": "online", 00:19:23.050 "raid_level": "raid1", 00:19:23.050 "superblock": true, 00:19:23.050 "num_base_bdevs": 2, 00:19:23.050 "num_base_bdevs_discovered": 1, 00:19:23.050 "num_base_bdevs_operational": 1, 00:19:23.050 "base_bdevs_list": [ 00:19:23.050 { 00:19:23.050 "name": null, 00:19:23.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.050 "is_configured": false, 00:19:23.050 "data_offset": 0, 00:19:23.050 "data_size": 7936 00:19:23.050 }, 00:19:23.050 { 00:19:23.050 "name": "BaseBdev2", 00:19:23.050 "uuid": "8f8142f4-1b8b-5c40-97de-cb3de07e62ac", 00:19:23.050 "is_configured": true, 00:19:23.050 "data_offset": 256, 00:19:23.050 "data_size": 7936 00:19:23.050 } 00:19:23.050 ] 00:19:23.050 }' 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98575 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 98575 ']' 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 98575 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98575 00:19:23.050 16:35:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.050 killing process with pid 98575 00:19:23.050 Received shutdown signal, test time was about 60.000000 seconds 00:19:23.050 00:19:23.050 Latency(us) 00:19:23.050 [2024-12-06T16:35:04.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.050 [2024-12-06T16:35:04.889Z] =================================================================================================================== 00:19:23.050 [2024-12-06T16:35:04.889Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98575' 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 98575 00:19:23.050 [2024-12-06 16:35:04.838002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:23.050 [2024-12-06 16:35:04.838127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.050 16:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 98575 00:19:23.050 [2024-12-06 16:35:04.838177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.050 [2024-12-06 16:35:04.838187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:19:23.050 [2024-12-06 16:35:04.871445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:23.316 16:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:19:23.316 00:19:23.316 real 0m18.221s 00:19:23.316 user 0m24.315s 00:19:23.316 sys 0m2.421s 00:19:23.316 
************************************ 00:19:23.316 END TEST raid_rebuild_test_sb_md_separate 00:19:23.316 ************************************ 00:19:23.316 16:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.316 16:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.316 16:35:05 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:23.316 16:35:05 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:23.316 16:35:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:23.316 16:35:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.316 16:35:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.580 ************************************ 00:19:23.580 START TEST raid_state_function_test_sb_md_interleaved 00:19:23.580 ************************************ 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:23.580 16:35:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=99252 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99252' 00:19:23.580 Process raid pid: 99252 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 99252 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 99252 ']' 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.580 16:35:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.580 [2024-12-06 16:35:05.245913] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:19:23.580 [2024-12-06 16:35:05.246109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.580 [2024-12-06 16:35:05.417321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.838 [2024-12-06 16:35:05.443695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.838 [2024-12-06 16:35:05.486743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.838 [2024-12-06 16:35:05.486872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.418 [2024-12-06 16:35:06.137516] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:24.418 [2024-12-06 16:35:06.137614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:24.418 [2024-12-06 16:35:06.137680] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:24.418 [2024-12-06 16:35:06.137710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:24.418 16:35:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.418 16:35:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.418 "name": "Existed_Raid", 00:19:24.418 "uuid": "70357c81-0ff6-438b-a913-9273c11ba39f", 00:19:24.418 "strip_size_kb": 0, 00:19:24.418 "state": "configuring", 00:19:24.418 "raid_level": "raid1", 00:19:24.418 "superblock": true, 00:19:24.418 "num_base_bdevs": 2, 00:19:24.418 "num_base_bdevs_discovered": 0, 00:19:24.418 "num_base_bdevs_operational": 2, 00:19:24.418 "base_bdevs_list": [ 00:19:24.418 { 00:19:24.418 "name": "BaseBdev1", 00:19:24.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.418 "is_configured": false, 00:19:24.418 "data_offset": 0, 00:19:24.418 "data_size": 0 00:19:24.418 }, 00:19:24.418 { 00:19:24.418 "name": "BaseBdev2", 00:19:24.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.418 "is_configured": false, 00:19:24.418 "data_offset": 0, 00:19:24.418 "data_size": 0 00:19:24.418 } 00:19:24.418 ] 00:19:24.418 }' 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.418 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.986 [2024-12-06 16:35:06.548736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:24.986 [2024-12-06 16:35:06.548838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.986 [2024-12-06 16:35:06.560714] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:24.986 [2024-12-06 16:35:06.560788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:24.986 [2024-12-06 16:35:06.560832] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:24.986 [2024-12-06 16:35:06.560861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.986 [2024-12-06 16:35:06.581609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.986 BaseBdev1 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.986 [ 00:19:24.986 { 00:19:24.986 "name": "BaseBdev1", 00:19:24.986 "aliases": [ 00:19:24.986 "88c3cfb4-d016-4814-8801-c1bc050ebb7d" 00:19:24.986 ], 00:19:24.986 "product_name": "Malloc disk", 00:19:24.986 "block_size": 4128, 00:19:24.986 "num_blocks": 8192, 00:19:24.986 "uuid": "88c3cfb4-d016-4814-8801-c1bc050ebb7d", 00:19:24.986 "md_size": 32, 00:19:24.986 
"md_interleave": true, 00:19:24.986 "dif_type": 0, 00:19:24.986 "assigned_rate_limits": { 00:19:24.986 "rw_ios_per_sec": 0, 00:19:24.986 "rw_mbytes_per_sec": 0, 00:19:24.986 "r_mbytes_per_sec": 0, 00:19:24.986 "w_mbytes_per_sec": 0 00:19:24.986 }, 00:19:24.986 "claimed": true, 00:19:24.986 "claim_type": "exclusive_write", 00:19:24.986 "zoned": false, 00:19:24.986 "supported_io_types": { 00:19:24.986 "read": true, 00:19:24.986 "write": true, 00:19:24.986 "unmap": true, 00:19:24.986 "flush": true, 00:19:24.986 "reset": true, 00:19:24.986 "nvme_admin": false, 00:19:24.986 "nvme_io": false, 00:19:24.986 "nvme_io_md": false, 00:19:24.986 "write_zeroes": true, 00:19:24.986 "zcopy": true, 00:19:24.986 "get_zone_info": false, 00:19:24.986 "zone_management": false, 00:19:24.986 "zone_append": false, 00:19:24.986 "compare": false, 00:19:24.986 "compare_and_write": false, 00:19:24.986 "abort": true, 00:19:24.986 "seek_hole": false, 00:19:24.986 "seek_data": false, 00:19:24.986 "copy": true, 00:19:24.986 "nvme_iov_md": false 00:19:24.986 }, 00:19:24.986 "memory_domains": [ 00:19:24.986 { 00:19:24.986 "dma_device_id": "system", 00:19:24.986 "dma_device_type": 1 00:19:24.986 }, 00:19:24.986 { 00:19:24.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.986 "dma_device_type": 2 00:19:24.986 } 00:19:24.986 ], 00:19:24.986 "driver_specific": {} 00:19:24.986 } 00:19:24.986 ] 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:24.986 16:35:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.986 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.986 "name": "Existed_Raid", 00:19:24.986 "uuid": "e329caa8-22af-4030-8b29-ece6c7738479", 00:19:24.986 "strip_size_kb": 0, 00:19:24.986 "state": "configuring", 00:19:24.986 "raid_level": "raid1", 
00:19:24.986 "superblock": true, 00:19:24.986 "num_base_bdevs": 2, 00:19:24.986 "num_base_bdevs_discovered": 1, 00:19:24.986 "num_base_bdevs_operational": 2, 00:19:24.986 "base_bdevs_list": [ 00:19:24.986 { 00:19:24.986 "name": "BaseBdev1", 00:19:24.986 "uuid": "88c3cfb4-d016-4814-8801-c1bc050ebb7d", 00:19:24.986 "is_configured": true, 00:19:24.986 "data_offset": 256, 00:19:24.986 "data_size": 7936 00:19:24.986 }, 00:19:24.986 { 00:19:24.986 "name": "BaseBdev2", 00:19:24.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.986 "is_configured": false, 00:19:24.986 "data_offset": 0, 00:19:24.986 "data_size": 0 00:19:24.986 } 00:19:24.986 ] 00:19:24.987 }' 00:19:24.987 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.987 16:35:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.246 [2024-12-06 16:35:07.048905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.246 [2024-12-06 16:35:07.048989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.246 [2024-12-06 16:35:07.060897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.246 [2024-12-06 16:35:07.062790] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.246 [2024-12-06 16:35:07.062862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.246 
16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.246 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.506 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.506 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.506 "name": "Existed_Raid", 00:19:25.506 "uuid": "fa05ae0c-2891-4ed7-a6a3-ea5d82739db5", 00:19:25.506 "strip_size_kb": 0, 00:19:25.506 "state": "configuring", 00:19:25.506 "raid_level": "raid1", 00:19:25.506 "superblock": true, 00:19:25.506 "num_base_bdevs": 2, 00:19:25.506 "num_base_bdevs_discovered": 1, 00:19:25.506 "num_base_bdevs_operational": 2, 00:19:25.506 "base_bdevs_list": [ 00:19:25.506 { 00:19:25.506 "name": "BaseBdev1", 00:19:25.506 "uuid": "88c3cfb4-d016-4814-8801-c1bc050ebb7d", 00:19:25.506 "is_configured": true, 00:19:25.506 "data_offset": 256, 00:19:25.506 "data_size": 7936 00:19:25.506 }, 00:19:25.506 { 00:19:25.506 "name": "BaseBdev2", 00:19:25.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.506 "is_configured": false, 00:19:25.506 "data_offset": 0, 00:19:25.506 "data_size": 0 00:19:25.506 } 00:19:25.506 ] 00:19:25.506 }' 00:19:25.506 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:25.506 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.765 [2024-12-06 16:35:07.443334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:25.765 [2024-12-06 16:35:07.443595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:19:25.765 [2024-12-06 16:35:07.443650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:25.765 [2024-12-06 16:35:07.443772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:25.765 [2024-12-06 16:35:07.443887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:19:25.765 [2024-12-06 16:35:07.443937] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:19:25.765 BaseBdev2 00:19:25.765 [2024-12-06 16:35:07.444042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:25.765 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.766 [ 00:19:25.766 { 00:19:25.766 "name": "BaseBdev2", 00:19:25.766 "aliases": [ 00:19:25.766 "39ea85e6-1852-4e5e-9128-6fca506d344c" 00:19:25.766 ], 00:19:25.766 "product_name": "Malloc disk", 00:19:25.766 "block_size": 4128, 00:19:25.766 "num_blocks": 8192, 00:19:25.766 "uuid": "39ea85e6-1852-4e5e-9128-6fca506d344c", 00:19:25.766 "md_size": 32, 00:19:25.766 "md_interleave": true, 00:19:25.766 "dif_type": 0, 00:19:25.766 "assigned_rate_limits": { 00:19:25.766 "rw_ios_per_sec": 0, 00:19:25.766 "rw_mbytes_per_sec": 0, 00:19:25.766 "r_mbytes_per_sec": 0, 00:19:25.766 "w_mbytes_per_sec": 0 00:19:25.766 }, 00:19:25.766 "claimed": true, 00:19:25.766 "claim_type": "exclusive_write", 
00:19:25.766 "zoned": false, 00:19:25.766 "supported_io_types": { 00:19:25.766 "read": true, 00:19:25.766 "write": true, 00:19:25.766 "unmap": true, 00:19:25.766 "flush": true, 00:19:25.766 "reset": true, 00:19:25.766 "nvme_admin": false, 00:19:25.766 "nvme_io": false, 00:19:25.766 "nvme_io_md": false, 00:19:25.766 "write_zeroes": true, 00:19:25.766 "zcopy": true, 00:19:25.766 "get_zone_info": false, 00:19:25.766 "zone_management": false, 00:19:25.766 "zone_append": false, 00:19:25.766 "compare": false, 00:19:25.766 "compare_and_write": false, 00:19:25.766 "abort": true, 00:19:25.766 "seek_hole": false, 00:19:25.766 "seek_data": false, 00:19:25.766 "copy": true, 00:19:25.766 "nvme_iov_md": false 00:19:25.766 }, 00:19:25.766 "memory_domains": [ 00:19:25.766 { 00:19:25.766 "dma_device_id": "system", 00:19:25.766 "dma_device_type": 1 00:19:25.766 }, 00:19:25.766 { 00:19:25.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.766 "dma_device_type": 2 00:19:25.766 } 00:19:25.766 ], 00:19:25.766 "driver_specific": {} 00:19:25.766 } 00:19:25.766 ] 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.766 
16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.766 "name": "Existed_Raid", 00:19:25.766 "uuid": "fa05ae0c-2891-4ed7-a6a3-ea5d82739db5", 00:19:25.766 "strip_size_kb": 0, 00:19:25.766 "state": "online", 00:19:25.766 "raid_level": "raid1", 00:19:25.766 "superblock": true, 00:19:25.766 "num_base_bdevs": 2, 00:19:25.766 "num_base_bdevs_discovered": 2, 00:19:25.766 
"num_base_bdevs_operational": 2, 00:19:25.766 "base_bdevs_list": [ 00:19:25.766 { 00:19:25.766 "name": "BaseBdev1", 00:19:25.766 "uuid": "88c3cfb4-d016-4814-8801-c1bc050ebb7d", 00:19:25.766 "is_configured": true, 00:19:25.766 "data_offset": 256, 00:19:25.766 "data_size": 7936 00:19:25.766 }, 00:19:25.766 { 00:19:25.766 "name": "BaseBdev2", 00:19:25.766 "uuid": "39ea85e6-1852-4e5e-9128-6fca506d344c", 00:19:25.766 "is_configured": true, 00:19:25.766 "data_offset": 256, 00:19:25.766 "data_size": 7936 00:19:25.766 } 00:19:25.766 ] 00:19:25.766 }' 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.766 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.336 16:35:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.336 [2024-12-06 16:35:07.922880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:26.336 "name": "Existed_Raid", 00:19:26.336 "aliases": [ 00:19:26.336 "fa05ae0c-2891-4ed7-a6a3-ea5d82739db5" 00:19:26.336 ], 00:19:26.336 "product_name": "Raid Volume", 00:19:26.336 "block_size": 4128, 00:19:26.336 "num_blocks": 7936, 00:19:26.336 "uuid": "fa05ae0c-2891-4ed7-a6a3-ea5d82739db5", 00:19:26.336 "md_size": 32, 00:19:26.336 "md_interleave": true, 00:19:26.336 "dif_type": 0, 00:19:26.336 "assigned_rate_limits": { 00:19:26.336 "rw_ios_per_sec": 0, 00:19:26.336 "rw_mbytes_per_sec": 0, 00:19:26.336 "r_mbytes_per_sec": 0, 00:19:26.336 "w_mbytes_per_sec": 0 00:19:26.336 }, 00:19:26.336 "claimed": false, 00:19:26.336 "zoned": false, 00:19:26.336 "supported_io_types": { 00:19:26.336 "read": true, 00:19:26.336 "write": true, 00:19:26.336 "unmap": false, 00:19:26.336 "flush": false, 00:19:26.336 "reset": true, 00:19:26.336 "nvme_admin": false, 00:19:26.336 "nvme_io": false, 00:19:26.336 "nvme_io_md": false, 00:19:26.336 "write_zeroes": true, 00:19:26.336 "zcopy": false, 00:19:26.336 "get_zone_info": false, 00:19:26.336 "zone_management": false, 00:19:26.336 "zone_append": false, 00:19:26.336 "compare": false, 00:19:26.336 "compare_and_write": false, 00:19:26.336 "abort": false, 00:19:26.336 "seek_hole": false, 00:19:26.336 "seek_data": false, 00:19:26.336 "copy": false, 00:19:26.336 "nvme_iov_md": false 00:19:26.336 }, 00:19:26.336 "memory_domains": [ 00:19:26.336 { 00:19:26.336 "dma_device_id": "system", 00:19:26.336 "dma_device_type": 1 00:19:26.336 }, 00:19:26.336 { 00:19:26.336 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:26.336 "dma_device_type": 2 00:19:26.336 }, 00:19:26.336 { 00:19:26.336 "dma_device_id": "system", 00:19:26.336 "dma_device_type": 1 00:19:26.336 }, 00:19:26.336 { 00:19:26.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.336 "dma_device_type": 2 00:19:26.336 } 00:19:26.336 ], 00:19:26.336 "driver_specific": { 00:19:26.336 "raid": { 00:19:26.336 "uuid": "fa05ae0c-2891-4ed7-a6a3-ea5d82739db5", 00:19:26.336 "strip_size_kb": 0, 00:19:26.336 "state": "online", 00:19:26.336 "raid_level": "raid1", 00:19:26.336 "superblock": true, 00:19:26.336 "num_base_bdevs": 2, 00:19:26.336 "num_base_bdevs_discovered": 2, 00:19:26.336 "num_base_bdevs_operational": 2, 00:19:26.336 "base_bdevs_list": [ 00:19:26.336 { 00:19:26.336 "name": "BaseBdev1", 00:19:26.336 "uuid": "88c3cfb4-d016-4814-8801-c1bc050ebb7d", 00:19:26.336 "is_configured": true, 00:19:26.336 "data_offset": 256, 00:19:26.336 "data_size": 7936 00:19:26.336 }, 00:19:26.336 { 00:19:26.336 "name": "BaseBdev2", 00:19:26.336 "uuid": "39ea85e6-1852-4e5e-9128-6fca506d344c", 00:19:26.336 "is_configured": true, 00:19:26.336 "data_offset": 256, 00:19:26.336 "data_size": 7936 00:19:26.336 } 00:19:26.336 ] 00:19:26.336 } 00:19:26.336 } 00:19:26.336 }' 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:26.336 BaseBdev2' 00:19:26.336 16:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:26.336 
16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.336 [2024-12-06 16:35:08.122321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.336 16:35:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.336 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.337 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.337 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.337 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.337 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.337 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.596 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.596 "name": "Existed_Raid", 00:19:26.596 "uuid": "fa05ae0c-2891-4ed7-a6a3-ea5d82739db5", 00:19:26.596 "strip_size_kb": 0, 00:19:26.596 "state": "online", 00:19:26.596 "raid_level": "raid1", 00:19:26.596 "superblock": true, 00:19:26.596 "num_base_bdevs": 2, 00:19:26.596 "num_base_bdevs_discovered": 1, 00:19:26.596 "num_base_bdevs_operational": 1, 00:19:26.596 "base_bdevs_list": [ 00:19:26.596 { 00:19:26.596 "name": null, 00:19:26.596 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:26.596 "is_configured": false, 00:19:26.596 "data_offset": 0, 00:19:26.596 "data_size": 7936 00:19:26.596 }, 00:19:26.596 { 00:19:26.596 "name": "BaseBdev2", 00:19:26.596 "uuid": "39ea85e6-1852-4e5e-9128-6fca506d344c", 00:19:26.596 "is_configured": true, 00:19:26.596 "data_offset": 256, 00:19:26.596 "data_size": 7936 00:19:26.596 } 00:19:26.596 ] 00:19:26.596 }' 00:19:26.596 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.596 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.856 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:26.856 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:26.856 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.856 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.856 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.856 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:26.856 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.856 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:26.856 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:26.856 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:26.856 16:35:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.856 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.856 [2024-12-06 16:35:08.569096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:26.856 [2024-12-06 16:35:08.569266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:26.856 [2024-12-06 16:35:08.581080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.856 [2024-12-06 16:35:08.581201] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.857 [2024-12-06 16:35:08.581260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 99252 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 99252 ']' 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 99252 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99252 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.857 killing process with pid 99252 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99252' 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 99252 00:19:26.857 [2024-12-06 16:35:08.678590] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.857 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 99252 00:19:26.857 [2024-12-06 16:35:08.679601] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.116 
16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:27.116 00:19:27.116 real 0m3.735s 00:19:27.116 user 0m5.862s 00:19:27.116 sys 0m0.784s 00:19:27.116 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.116 16:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.116 ************************************ 00:19:27.116 END TEST raid_state_function_test_sb_md_interleaved 00:19:27.116 ************************************ 00:19:27.116 16:35:08 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:27.116 16:35:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:27.116 16:35:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.116 16:35:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.377 ************************************ 00:19:27.377 START TEST raid_superblock_test_md_interleaved 00:19:27.377 ************************************ 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99487 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99487 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 99487 ']' 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.377 16:35:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.377 [2024-12-06 16:35:09.046009] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:19:27.377 [2024-12-06 16:35:09.046230] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99487 ] 00:19:27.636 [2024-12-06 16:35:09.216212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.636 [2024-12-06 16:35:09.241565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.636 [2024-12-06 16:35:09.285737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.636 [2024-12-06 16:35:09.285862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.205 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.205 malloc1 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.206 [2024-12-06 16:35:09.906261] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:28.206 [2024-12-06 16:35:09.906387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.206 [2024-12-06 16:35:09.906431] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:28.206 [2024-12-06 16:35:09.906463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.206 
[2024-12-06 16:35:09.908394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.206 [2024-12-06 16:35:09.908468] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:28.206 pt1 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.206 malloc2 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.206 [2024-12-06 16:35:09.938896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:28.206 [2024-12-06 16:35:09.939002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.206 [2024-12-06 16:35:09.939034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:28.206 [2024-12-06 16:35:09.939063] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.206 [2024-12-06 16:35:09.940940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.206 [2024-12-06 16:35:09.940979] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:28.206 pt2 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.206 [2024-12-06 16:35:09.950910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:28.206 [2024-12-06 16:35:09.952774] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:28.206 [2024-12-06 16:35:09.952961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:19:28.206 [2024-12-06 16:35:09.953028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:28.206 [2024-12-06 16:35:09.953122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:28.206 [2024-12-06 16:35:09.953246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:19:28.206 [2024-12-06 16:35:09.953299] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:19:28.206 [2024-12-06 16:35:09.953395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.206 
16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.206 "name": "raid_bdev1", 00:19:28.206 "uuid": "5cc93ef3-d202-46ba-9279-72dff47b99e3", 00:19:28.206 "strip_size_kb": 0, 00:19:28.206 "state": "online", 00:19:28.206 "raid_level": "raid1", 00:19:28.206 "superblock": true, 00:19:28.206 "num_base_bdevs": 2, 00:19:28.206 "num_base_bdevs_discovered": 2, 00:19:28.206 "num_base_bdevs_operational": 2, 00:19:28.206 "base_bdevs_list": [ 00:19:28.206 { 00:19:28.206 "name": "pt1", 00:19:28.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:28.206 "is_configured": true, 00:19:28.206 "data_offset": 256, 00:19:28.206 "data_size": 7936 00:19:28.206 }, 00:19:28.206 { 00:19:28.206 "name": "pt2", 00:19:28.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.206 "is_configured": true, 00:19:28.206 "data_offset": 256, 00:19:28.206 "data_size": 7936 00:19:28.206 } 00:19:28.206 ] 00:19:28.206 }' 00:19:28.206 16:35:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.206 16:35:09 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.775 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:28.775 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:28.775 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:28.775 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:28.775 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:28.775 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:28.775 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:28.775 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.775 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.775 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:28.775 [2024-12-06 16:35:10.362534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.775 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.775 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:28.775 "name": "raid_bdev1", 00:19:28.775 "aliases": [ 00:19:28.775 "5cc93ef3-d202-46ba-9279-72dff47b99e3" 00:19:28.775 ], 00:19:28.775 "product_name": "Raid Volume", 00:19:28.775 "block_size": 4128, 00:19:28.775 "num_blocks": 7936, 00:19:28.775 "uuid": "5cc93ef3-d202-46ba-9279-72dff47b99e3", 00:19:28.775 "md_size": 32, 
00:19:28.775 "md_interleave": true, 00:19:28.775 "dif_type": 0, 00:19:28.775 "assigned_rate_limits": { 00:19:28.775 "rw_ios_per_sec": 0, 00:19:28.775 "rw_mbytes_per_sec": 0, 00:19:28.775 "r_mbytes_per_sec": 0, 00:19:28.775 "w_mbytes_per_sec": 0 00:19:28.775 }, 00:19:28.775 "claimed": false, 00:19:28.775 "zoned": false, 00:19:28.775 "supported_io_types": { 00:19:28.775 "read": true, 00:19:28.775 "write": true, 00:19:28.775 "unmap": false, 00:19:28.775 "flush": false, 00:19:28.775 "reset": true, 00:19:28.775 "nvme_admin": false, 00:19:28.775 "nvme_io": false, 00:19:28.775 "nvme_io_md": false, 00:19:28.775 "write_zeroes": true, 00:19:28.775 "zcopy": false, 00:19:28.775 "get_zone_info": false, 00:19:28.775 "zone_management": false, 00:19:28.775 "zone_append": false, 00:19:28.775 "compare": false, 00:19:28.775 "compare_and_write": false, 00:19:28.775 "abort": false, 00:19:28.775 "seek_hole": false, 00:19:28.775 "seek_data": false, 00:19:28.775 "copy": false, 00:19:28.775 "nvme_iov_md": false 00:19:28.775 }, 00:19:28.775 "memory_domains": [ 00:19:28.775 { 00:19:28.775 "dma_device_id": "system", 00:19:28.775 "dma_device_type": 1 00:19:28.775 }, 00:19:28.775 { 00:19:28.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.775 "dma_device_type": 2 00:19:28.776 }, 00:19:28.776 { 00:19:28.776 "dma_device_id": "system", 00:19:28.776 "dma_device_type": 1 00:19:28.776 }, 00:19:28.776 { 00:19:28.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.776 "dma_device_type": 2 00:19:28.776 } 00:19:28.776 ], 00:19:28.776 "driver_specific": { 00:19:28.776 "raid": { 00:19:28.776 "uuid": "5cc93ef3-d202-46ba-9279-72dff47b99e3", 00:19:28.776 "strip_size_kb": 0, 00:19:28.776 "state": "online", 00:19:28.776 "raid_level": "raid1", 00:19:28.776 "superblock": true, 00:19:28.776 "num_base_bdevs": 2, 00:19:28.776 "num_base_bdevs_discovered": 2, 00:19:28.776 "num_base_bdevs_operational": 2, 00:19:28.776 "base_bdevs_list": [ 00:19:28.776 { 00:19:28.776 "name": "pt1", 00:19:28.776 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:28.776 "is_configured": true, 00:19:28.776 "data_offset": 256, 00:19:28.776 "data_size": 7936 00:19:28.776 }, 00:19:28.776 { 00:19:28.776 "name": "pt2", 00:19:28.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.776 "is_configured": true, 00:19:28.776 "data_offset": 256, 00:19:28.776 "data_size": 7936 00:19:28.776 } 00:19:28.776 ] 00:19:28.776 } 00:19:28.776 } 00:19:28.776 }' 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:28.776 pt2' 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:28.776 16:35:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.776 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.776 [2024-12-06 16:35:10.594030] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5cc93ef3-d202-46ba-9279-72dff47b99e3 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 5cc93ef3-d202-46ba-9279-72dff47b99e3 ']' 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.036 [2024-12-06 16:35:10.641699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.036 [2024-12-06 16:35:10.641760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.036 [2024-12-06 16:35:10.641851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.036 [2024-12-06 16:35:10.641930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.036 [2024-12-06 16:35:10.641974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.036 16:35:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:29.036 16:35:10 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.036 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.036 [2024-12-06 16:35:10.785481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:29.036 [2024-12-06 16:35:10.787304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:29.036 [2024-12-06 16:35:10.787361] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:19:29.036 [2024-12-06 16:35:10.787416] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:29.036 [2024-12-06 16:35:10.787431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.037 [2024-12-06 16:35:10.787446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:19:29.037 request: 00:19:29.037 { 00:19:29.037 "name": "raid_bdev1", 00:19:29.037 "raid_level": "raid1", 00:19:29.037 "base_bdevs": [ 00:19:29.037 "malloc1", 00:19:29.037 "malloc2" 00:19:29.037 ], 00:19:29.037 "superblock": false, 00:19:29.037 "method": "bdev_raid_create", 00:19:29.037 "req_id": 1 00:19:29.037 } 00:19:29.037 Got JSON-RPC error response 00:19:29.037 response: 00:19:29.037 { 00:19:29.037 "code": -17, 00:19:29.037 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:29.037 } 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.037 16:35:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.037 [2024-12-06 16:35:10.853327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:29.037 [2024-12-06 16:35:10.853413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.037 [2024-12-06 16:35:10.853450] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:29.037 [2024-12-06 16:35:10.853477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.037 [2024-12-06 16:35:10.855393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.037 [2024-12-06 16:35:10.855457] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:29.037 [2024-12-06 16:35:10.855534] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:29.037 [2024-12-06 16:35:10.855600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:29.037 pt1 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.037 16:35:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.037 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.297 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.297 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.297 
"name": "raid_bdev1", 00:19:29.297 "uuid": "5cc93ef3-d202-46ba-9279-72dff47b99e3", 00:19:29.297 "strip_size_kb": 0, 00:19:29.297 "state": "configuring", 00:19:29.297 "raid_level": "raid1", 00:19:29.297 "superblock": true, 00:19:29.297 "num_base_bdevs": 2, 00:19:29.297 "num_base_bdevs_discovered": 1, 00:19:29.297 "num_base_bdevs_operational": 2, 00:19:29.297 "base_bdevs_list": [ 00:19:29.297 { 00:19:29.297 "name": "pt1", 00:19:29.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:29.297 "is_configured": true, 00:19:29.297 "data_offset": 256, 00:19:29.297 "data_size": 7936 00:19:29.297 }, 00:19:29.297 { 00:19:29.297 "name": null, 00:19:29.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.297 "is_configured": false, 00:19:29.297 "data_offset": 256, 00:19:29.297 "data_size": 7936 00:19:29.297 } 00:19:29.297 ] 00:19:29.297 }' 00:19:29.297 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.297 16:35:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.557 [2024-12-06 16:35:11.244703] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:29.557 [2024-12-06 16:35:11.244810] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.557 [2024-12-06 16:35:11.244855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:29.557 [2024-12-06 16:35:11.244888] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.557 [2024-12-06 16:35:11.245063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.557 [2024-12-06 16:35:11.245076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:29.557 [2024-12-06 16:35:11.245128] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:29.557 [2024-12-06 16:35:11.245148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:29.557 [2024-12-06 16:35:11.245239] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:19:29.557 [2024-12-06 16:35:11.245247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:29.557 [2024-12-06 16:35:11.245327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:29.557 [2024-12-06 16:35:11.245385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:19:29.557 [2024-12-06 16:35:11.245398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:19:29.557 [2024-12-06 16:35:11.245453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.557 pt2 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:29.557 16:35:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.557 "name": 
"raid_bdev1", 00:19:29.557 "uuid": "5cc93ef3-d202-46ba-9279-72dff47b99e3", 00:19:29.557 "strip_size_kb": 0, 00:19:29.557 "state": "online", 00:19:29.557 "raid_level": "raid1", 00:19:29.557 "superblock": true, 00:19:29.557 "num_base_bdevs": 2, 00:19:29.557 "num_base_bdevs_discovered": 2, 00:19:29.557 "num_base_bdevs_operational": 2, 00:19:29.557 "base_bdevs_list": [ 00:19:29.557 { 00:19:29.557 "name": "pt1", 00:19:29.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:29.557 "is_configured": true, 00:19:29.557 "data_offset": 256, 00:19:29.557 "data_size": 7936 00:19:29.557 }, 00:19:29.557 { 00:19:29.557 "name": "pt2", 00:19:29.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.557 "is_configured": true, 00:19:29.557 "data_offset": 256, 00:19:29.557 "data_size": 7936 00:19:29.557 } 00:19:29.557 ] 00:19:29.557 }' 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.557 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.126 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:30.126 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:30.126 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:30.126 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:30.126 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:30.126 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:30.126 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:30.126 16:35:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:30.126 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.126 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.126 [2024-12-06 16:35:11.684259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.126 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.126 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:30.126 "name": "raid_bdev1", 00:19:30.126 "aliases": [ 00:19:30.126 "5cc93ef3-d202-46ba-9279-72dff47b99e3" 00:19:30.126 ], 00:19:30.126 "product_name": "Raid Volume", 00:19:30.126 "block_size": 4128, 00:19:30.126 "num_blocks": 7936, 00:19:30.126 "uuid": "5cc93ef3-d202-46ba-9279-72dff47b99e3", 00:19:30.126 "md_size": 32, 00:19:30.126 "md_interleave": true, 00:19:30.126 "dif_type": 0, 00:19:30.126 "assigned_rate_limits": { 00:19:30.126 "rw_ios_per_sec": 0, 00:19:30.127 "rw_mbytes_per_sec": 0, 00:19:30.127 "r_mbytes_per_sec": 0, 00:19:30.127 "w_mbytes_per_sec": 0 00:19:30.127 }, 00:19:30.127 "claimed": false, 00:19:30.127 "zoned": false, 00:19:30.127 "supported_io_types": { 00:19:30.127 "read": true, 00:19:30.127 "write": true, 00:19:30.127 "unmap": false, 00:19:30.127 "flush": false, 00:19:30.127 "reset": true, 00:19:30.127 "nvme_admin": false, 00:19:30.127 "nvme_io": false, 00:19:30.127 "nvme_io_md": false, 00:19:30.127 "write_zeroes": true, 00:19:30.127 "zcopy": false, 00:19:30.127 "get_zone_info": false, 00:19:30.127 "zone_management": false, 00:19:30.127 "zone_append": false, 00:19:30.127 "compare": false, 00:19:30.127 "compare_and_write": false, 00:19:30.127 "abort": false, 00:19:30.127 "seek_hole": false, 00:19:30.127 "seek_data": false, 00:19:30.127 "copy": false, 00:19:30.127 "nvme_iov_md": 
false 00:19:30.127 }, 00:19:30.127 "memory_domains": [ 00:19:30.127 { 00:19:30.127 "dma_device_id": "system", 00:19:30.127 "dma_device_type": 1 00:19:30.127 }, 00:19:30.127 { 00:19:30.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.127 "dma_device_type": 2 00:19:30.127 }, 00:19:30.127 { 00:19:30.127 "dma_device_id": "system", 00:19:30.127 "dma_device_type": 1 00:19:30.127 }, 00:19:30.127 { 00:19:30.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.127 "dma_device_type": 2 00:19:30.127 } 00:19:30.127 ], 00:19:30.127 "driver_specific": { 00:19:30.127 "raid": { 00:19:30.127 "uuid": "5cc93ef3-d202-46ba-9279-72dff47b99e3", 00:19:30.127 "strip_size_kb": 0, 00:19:30.127 "state": "online", 00:19:30.127 "raid_level": "raid1", 00:19:30.127 "superblock": true, 00:19:30.127 "num_base_bdevs": 2, 00:19:30.127 "num_base_bdevs_discovered": 2, 00:19:30.127 "num_base_bdevs_operational": 2, 00:19:30.127 "base_bdevs_list": [ 00:19:30.127 { 00:19:30.127 "name": "pt1", 00:19:30.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:30.127 "is_configured": true, 00:19:30.127 "data_offset": 256, 00:19:30.127 "data_size": 7936 00:19:30.127 }, 00:19:30.127 { 00:19:30.127 "name": "pt2", 00:19:30.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.127 "is_configured": true, 00:19:30.127 "data_offset": 256, 00:19:30.127 "data_size": 7936 00:19:30.127 } 00:19:30.127 ] 00:19:30.127 } 00:19:30.127 } 00:19:30.127 }' 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:30.127 pt2' 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:30.127 [2024-12-06 16:35:11.919794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.127 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 5cc93ef3-d202-46ba-9279-72dff47b99e3 '!=' 5cc93ef3-d202-46ba-9279-72dff47b99e3 ']' 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.386 [2024-12-06 16:35:11.971506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.386 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.387 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.387 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.387 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.387 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.387 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.387 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.387 16:35:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.387 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:30.387 "name": "raid_bdev1", 00:19:30.387 "uuid": "5cc93ef3-d202-46ba-9279-72dff47b99e3", 00:19:30.387 "strip_size_kb": 0, 00:19:30.387 "state": "online", 00:19:30.387 "raid_level": "raid1", 00:19:30.387 "superblock": true, 00:19:30.387 "num_base_bdevs": 2, 00:19:30.387 "num_base_bdevs_discovered": 1, 00:19:30.387 "num_base_bdevs_operational": 1, 00:19:30.387 "base_bdevs_list": [ 00:19:30.387 { 00:19:30.387 "name": null, 00:19:30.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.387 "is_configured": false, 00:19:30.387 "data_offset": 0, 00:19:30.387 "data_size": 7936 00:19:30.387 }, 00:19:30.387 { 00:19:30.387 "name": "pt2", 00:19:30.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.387 "is_configured": true, 00:19:30.387 "data_offset": 256, 00:19:30.387 "data_size": 7936 00:19:30.387 } 00:19:30.387 ] 00:19:30.387 }' 00:19:30.387 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.387 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.646 [2024-12-06 16:35:12.378782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:30.646 [2024-12-06 16:35:12.378810] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:30.646 [2024-12-06 16:35:12.378885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.646 [2024-12-06 16:35:12.378934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:30.646 [2024-12-06 16:35:12.378951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.646 [2024-12-06 16:35:12.450660] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:30.646 [2024-12-06 16:35:12.450745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.646 [2024-12-06 16:35:12.450781] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:30.646 [2024-12-06 16:35:12.450805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.646 [2024-12-06 16:35:12.452704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.646 [2024-12-06 16:35:12.452769] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:30.646 [2024-12-06 16:35:12.452825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:30.646 [2024-12-06 16:35:12.452859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:30.646 [2024-12-06 16:35:12.452919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:19:30.646 [2024-12-06 16:35:12.452927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:19:30.646 [2024-12-06 16:35:12.453013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:30.646 [2024-12-06 16:35:12.453071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:19:30.646 [2024-12-06 16:35:12.453080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:19:30.646 [2024-12-06 16:35:12.453132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.646 pt2 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.646 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.647 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:30.647 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.647 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.647 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.647 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.647 16:35:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.647 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.647 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.647 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.647 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.948 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.948 "name": "raid_bdev1", 00:19:30.948 "uuid": "5cc93ef3-d202-46ba-9279-72dff47b99e3", 00:19:30.948 "strip_size_kb": 0, 00:19:30.948 "state": "online", 00:19:30.948 "raid_level": "raid1", 00:19:30.948 "superblock": true, 00:19:30.948 "num_base_bdevs": 2, 00:19:30.948 "num_base_bdevs_discovered": 1, 00:19:30.948 "num_base_bdevs_operational": 1, 00:19:30.948 "base_bdevs_list": [ 00:19:30.948 { 00:19:30.948 "name": null, 00:19:30.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.948 "is_configured": false, 00:19:30.948 "data_offset": 256, 00:19:30.948 "data_size": 7936 00:19:30.948 }, 00:19:30.948 { 00:19:30.948 "name": "pt2", 00:19:30.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.948 "is_configured": true, 00:19:30.948 "data_offset": 256, 00:19:30.948 "data_size": 7936 00:19:30.948 } 00:19:30.948 ] 00:19:30.948 }' 00:19:30.948 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.948 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:31.208 16:35:12 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.208 [2024-12-06 16:35:12.873969] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:31.208 [2024-12-06 16:35:12.874034] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:31.208 [2024-12-06 16:35:12.874124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.208 [2024-12-06 16:35:12.874187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.208 [2024-12-06 16:35:12.874251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.208 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.208 [2024-12-06 16:35:12.921851] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:31.208 [2024-12-06 16:35:12.921939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.208 [2024-12-06 16:35:12.921970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:31.208 [2024-12-06 16:35:12.922001] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.208 [2024-12-06 16:35:12.923879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.208 [2024-12-06 16:35:12.923965] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:31.208 [2024-12-06 16:35:12.924034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:31.208 [2024-12-06 16:35:12.924104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:31.208 [2024-12-06 16:35:12.924243] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:31.208 [2024-12-06 16:35:12.924303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:31.208 [2024-12-06 16:35:12.924347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:19:31.208 [2024-12-06 16:35:12.924418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:31.208 [2024-12-06 16:35:12.924517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007400 00:19:31.209 [2024-12-06 16:35:12.924560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:31.209 [2024-12-06 16:35:12.924646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:31.209 [2024-12-06 16:35:12.924734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:19:31.209 [2024-12-06 16:35:12.924768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:19:31.209 [2024-12-06 16:35:12.924867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.209 pt1 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.209 16:35:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.209 "name": "raid_bdev1", 00:19:31.209 "uuid": "5cc93ef3-d202-46ba-9279-72dff47b99e3", 00:19:31.209 "strip_size_kb": 0, 00:19:31.209 "state": "online", 00:19:31.209 "raid_level": "raid1", 00:19:31.209 "superblock": true, 00:19:31.209 "num_base_bdevs": 2, 00:19:31.209 "num_base_bdevs_discovered": 1, 00:19:31.209 "num_base_bdevs_operational": 1, 00:19:31.209 "base_bdevs_list": [ 00:19:31.209 { 00:19:31.209 "name": null, 00:19:31.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.209 "is_configured": false, 00:19:31.209 "data_offset": 256, 00:19:31.209 "data_size": 7936 00:19:31.209 }, 00:19:31.209 { 00:19:31.209 "name": "pt2", 00:19:31.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:31.209 "is_configured": true, 00:19:31.209 "data_offset": 256, 00:19:31.209 "data_size": 7936 00:19:31.209 } 00:19:31.209 ] 00:19:31.209 }' 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.209 16:35:12 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.778 [2024-12-06 16:35:13.365357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 5cc93ef3-d202-46ba-9279-72dff47b99e3 '!=' 5cc93ef3-d202-46ba-9279-72dff47b99e3 ']' 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99487 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 99487 ']' 00:19:31.778 16:35:13 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 99487 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99487 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.778 killing process with pid 99487 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99487' 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 99487 00:19:31.778 [2024-12-06 16:35:13.429257] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:31.778 [2024-12-06 16:35:13.429332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.778 [2024-12-06 16:35:13.429380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.778 [2024-12-06 16:35:13.429388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:19:31.778 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 99487 00:19:31.778 [2024-12-06 16:35:13.452446] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.038 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:32.038 00:19:32.038 real 0m4.700s 00:19:32.038 user 0m7.670s 00:19:32.038 sys 0m0.996s 00:19:32.038 
16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.038 ************************************ 00:19:32.038 END TEST raid_superblock_test_md_interleaved 00:19:32.038 ************************************ 00:19:32.038 16:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.038 16:35:13 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:32.038 16:35:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:32.038 16:35:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.038 16:35:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.038 ************************************ 00:19:32.038 START TEST raid_rebuild_test_sb_md_interleaved 00:19:32.038 ************************************ 00:19:32.038 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:19:32.038 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=99799 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99799 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 99799 ']' 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.039 16:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.039 [2024-12-06 16:35:13.838481] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:19:32.039 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:32.039 Zero copy mechanism will not be used. 
00:19:32.039 [2024-12-06 16:35:13.838780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99799 ] 00:19:32.298 [2024-12-06 16:35:14.027502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.298 [2024-12-06 16:35:14.056244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.298 [2024-12-06 16:35:14.099261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:32.298 [2024-12-06 16:35:14.099296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:32.864 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.864 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:32.864 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:32.864 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:32.864 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.864 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.864 BaseBdev1_malloc 00:19:32.865 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.865 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:32.865 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.865 16:35:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.123 [2024-12-06 16:35:14.702793] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:33.123 [2024-12-06 16:35:14.702922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.123 [2024-12-06 16:35:14.702976] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:33.123 [2024-12-06 16:35:14.703030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.123 [2024-12-06 16:35:14.705064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.123 [2024-12-06 16:35:14.705136] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:33.123 BaseBdev1 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.123 BaseBdev2_malloc 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:33.123 [2024-12-06 16:35:14.732030] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:33.123 [2024-12-06 16:35:14.732177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.123 [2024-12-06 16:35:14.732230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:33.123 [2024-12-06 16:35:14.732269] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.123 [2024-12-06 16:35:14.734245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.123 [2024-12-06 16:35:14.734309] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:33.123 BaseBdev2 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.123 spare_malloc 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.123 spare_delay 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.123 [2024-12-06 16:35:14.782606] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:33.123 [2024-12-06 16:35:14.782704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.123 [2024-12-06 16:35:14.782768] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:33.123 [2024-12-06 16:35:14.782810] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.123 [2024-12-06 16:35:14.785023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.123 [2024-12-06 16:35:14.785100] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:33.123 spare 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.123 [2024-12-06 16:35:14.794617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:33.123 [2024-12-06 16:35:14.796614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.123 [2024-12-06 
16:35:14.796800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:19:33.123 [2024-12-06 16:35:14.796815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:33.123 [2024-12-06 16:35:14.796906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:33.123 [2024-12-06 16:35:14.796985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:19:33.123 [2024-12-06 16:35:14.796997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:19:33.123 [2024-12-06 16:35:14.797064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.123 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.124 "name": "raid_bdev1", 00:19:33.124 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:33.124 "strip_size_kb": 0, 00:19:33.124 "state": "online", 00:19:33.124 "raid_level": "raid1", 00:19:33.124 "superblock": true, 00:19:33.124 "num_base_bdevs": 2, 00:19:33.124 "num_base_bdevs_discovered": 2, 00:19:33.124 "num_base_bdevs_operational": 2, 00:19:33.124 "base_bdevs_list": [ 00:19:33.124 { 00:19:33.124 "name": "BaseBdev1", 00:19:33.124 "uuid": "74468e50-9a0e-5033-b1a3-7259b16f1e78", 00:19:33.124 "is_configured": true, 00:19:33.124 "data_offset": 256, 00:19:33.124 "data_size": 7936 00:19:33.124 }, 00:19:33.124 { 00:19:33.124 "name": "BaseBdev2", 00:19:33.124 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:33.124 "is_configured": true, 00:19:33.124 "data_offset": 256, 00:19:33.124 "data_size": 7936 00:19:33.124 } 00:19:33.124 ] 00:19:33.124 }' 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.124 16:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.383 16:35:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:33.383 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:33.383 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.383 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.383 [2024-12-06 16:35:15.178304] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.383 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.383 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:33.642 16:35:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.642 [2024-12-06 16:35:15.273775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.642 16:35:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.642 "name": "raid_bdev1", 00:19:33.642 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:33.642 "strip_size_kb": 0, 00:19:33.642 "state": "online", 00:19:33.642 "raid_level": "raid1", 00:19:33.642 "superblock": true, 00:19:33.642 "num_base_bdevs": 2, 00:19:33.642 "num_base_bdevs_discovered": 1, 00:19:33.642 "num_base_bdevs_operational": 1, 00:19:33.642 "base_bdevs_list": [ 00:19:33.642 { 00:19:33.642 "name": null, 00:19:33.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.642 "is_configured": false, 00:19:33.642 "data_offset": 0, 00:19:33.642 "data_size": 7936 00:19:33.642 }, 00:19:33.642 { 00:19:33.642 "name": "BaseBdev2", 00:19:33.642 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:33.642 "is_configured": true, 00:19:33.642 "data_offset": 256, 00:19:33.642 "data_size": 7936 00:19:33.642 } 00:19:33.642 ] 00:19:33.642 }' 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.642 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.902 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:33.902 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.902 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.902 [2024-12-06 16:35:15.701082] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:33.902 [2024-12-06 16:35:15.704877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:33.902 [2024-12-06 16:35:15.706813] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:33.902 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.902 16:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.285 "name": "raid_bdev1", 00:19:35.285 
"uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:35.285 "strip_size_kb": 0, 00:19:35.285 "state": "online", 00:19:35.285 "raid_level": "raid1", 00:19:35.285 "superblock": true, 00:19:35.285 "num_base_bdevs": 2, 00:19:35.285 "num_base_bdevs_discovered": 2, 00:19:35.285 "num_base_bdevs_operational": 2, 00:19:35.285 "process": { 00:19:35.285 "type": "rebuild", 00:19:35.285 "target": "spare", 00:19:35.285 "progress": { 00:19:35.285 "blocks": 2560, 00:19:35.285 "percent": 32 00:19:35.285 } 00:19:35.285 }, 00:19:35.285 "base_bdevs_list": [ 00:19:35.285 { 00:19:35.285 "name": "spare", 00:19:35.285 "uuid": "b97bdcb5-d49d-58ec-b48d-46ec53c12480", 00:19:35.285 "is_configured": true, 00:19:35.285 "data_offset": 256, 00:19:35.285 "data_size": 7936 00:19:35.285 }, 00:19:35.285 { 00:19:35.285 "name": "BaseBdev2", 00:19:35.285 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:35.285 "is_configured": true, 00:19:35.285 "data_offset": 256, 00:19:35.285 "data_size": 7936 00:19:35.285 } 00:19:35.285 ] 00:19:35.285 }' 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.285 [2024-12-06 16:35:16.873693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:35.285 [2024-12-06 16:35:16.911851] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:35.285 [2024-12-06 16:35:16.911947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.285 [2024-12-06 16:35:16.912000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:35.285 [2024-12-06 16:35:16.912021] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.285 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.285 "name": "raid_bdev1", 00:19:35.285 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:35.285 "strip_size_kb": 0, 00:19:35.285 "state": "online", 00:19:35.285 "raid_level": "raid1", 00:19:35.285 "superblock": true, 00:19:35.285 "num_base_bdevs": 2, 00:19:35.285 "num_base_bdevs_discovered": 1, 00:19:35.285 "num_base_bdevs_operational": 1, 00:19:35.285 "base_bdevs_list": [ 00:19:35.285 { 00:19:35.285 "name": null, 00:19:35.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.285 "is_configured": false, 00:19:35.286 "data_offset": 0, 00:19:35.286 "data_size": 7936 00:19:35.286 }, 00:19:35.286 { 00:19:35.286 "name": "BaseBdev2", 00:19:35.286 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:35.286 "is_configured": true, 00:19:35.286 "data_offset": 256, 00:19:35.286 "data_size": 7936 00:19:35.286 } 00:19:35.286 ] 00:19:35.286 }' 00:19:35.286 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.286 16:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.854 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.855 "name": "raid_bdev1", 00:19:35.855 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:35.855 "strip_size_kb": 0, 00:19:35.855 "state": "online", 00:19:35.855 "raid_level": "raid1", 00:19:35.855 "superblock": true, 00:19:35.855 "num_base_bdevs": 2, 00:19:35.855 "num_base_bdevs_discovered": 1, 00:19:35.855 "num_base_bdevs_operational": 1, 00:19:35.855 "base_bdevs_list": [ 00:19:35.855 { 00:19:35.855 "name": null, 00:19:35.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.855 "is_configured": false, 00:19:35.855 "data_offset": 0, 00:19:35.855 "data_size": 7936 00:19:35.855 }, 00:19:35.855 { 00:19:35.855 "name": "BaseBdev2", 00:19:35.855 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:35.855 "is_configured": true, 00:19:35.855 "data_offset": 256, 00:19:35.855 "data_size": 7936 00:19:35.855 } 00:19:35.855 ] 00:19:35.855 }' 
00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.855 [2024-12-06 16:35:17.543301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:35.855 [2024-12-06 16:35:17.547054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:35.855 [2024-12-06 16:35:17.548996] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.855 16:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:36.794 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.794 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.794 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.794 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:36.794 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.794 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.794 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.794 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.794 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.794 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.794 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.794 "name": "raid_bdev1", 00:19:36.794 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:36.794 "strip_size_kb": 0, 00:19:36.794 "state": "online", 00:19:36.794 "raid_level": "raid1", 00:19:36.794 "superblock": true, 00:19:36.794 "num_base_bdevs": 2, 00:19:36.794 "num_base_bdevs_discovered": 2, 00:19:36.794 "num_base_bdevs_operational": 2, 00:19:36.794 "process": { 00:19:36.794 "type": "rebuild", 00:19:36.794 "target": "spare", 00:19:36.794 "progress": { 00:19:36.794 "blocks": 2560, 00:19:36.794 "percent": 32 00:19:36.794 } 00:19:36.794 }, 00:19:36.794 "base_bdevs_list": [ 00:19:36.794 { 00:19:36.794 "name": "spare", 00:19:36.794 "uuid": "b97bdcb5-d49d-58ec-b48d-46ec53c12480", 00:19:36.794 "is_configured": true, 00:19:36.794 "data_offset": 256, 00:19:36.794 "data_size": 7936 00:19:36.794 }, 00:19:36.794 { 00:19:36.794 "name": "BaseBdev2", 00:19:36.794 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:36.794 "is_configured": true, 00:19:36.794 "data_offset": 256, 00:19:36.794 "data_size": 7936 00:19:36.794 } 00:19:36.794 ] 00:19:36.794 }' 00:19:36.794 16:35:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:37.055 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=628 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.055 16:35:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.055 "name": "raid_bdev1", 00:19:37.055 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:37.055 "strip_size_kb": 0, 00:19:37.055 "state": "online", 00:19:37.055 "raid_level": "raid1", 00:19:37.055 "superblock": true, 00:19:37.055 "num_base_bdevs": 2, 00:19:37.055 "num_base_bdevs_discovered": 2, 00:19:37.055 "num_base_bdevs_operational": 2, 00:19:37.055 "process": { 00:19:37.055 "type": "rebuild", 00:19:37.055 "target": "spare", 00:19:37.055 "progress": { 00:19:37.055 "blocks": 2816, 00:19:37.055 "percent": 35 00:19:37.055 } 00:19:37.055 }, 00:19:37.055 "base_bdevs_list": [ 00:19:37.055 { 00:19:37.055 "name": "spare", 00:19:37.055 "uuid": "b97bdcb5-d49d-58ec-b48d-46ec53c12480", 00:19:37.055 "is_configured": true, 00:19:37.055 "data_offset": 256, 00:19:37.055 "data_size": 7936 00:19:37.055 }, 00:19:37.055 { 00:19:37.055 "name": "BaseBdev2", 00:19:37.055 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:37.055 "is_configured": true, 00:19:37.055 "data_offset": 256, 00:19:37.055 "data_size": 7936 00:19:37.055 } 00:19:37.055 ] 00:19:37.055 }' 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.055 16:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.438 16:35:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.438 "name": "raid_bdev1", 00:19:38.438 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:38.438 "strip_size_kb": 0, 00:19:38.438 "state": "online", 00:19:38.438 "raid_level": "raid1", 00:19:38.438 "superblock": true, 00:19:38.438 "num_base_bdevs": 2, 00:19:38.438 "num_base_bdevs_discovered": 2, 00:19:38.438 "num_base_bdevs_operational": 2, 00:19:38.438 "process": { 00:19:38.438 "type": "rebuild", 00:19:38.438 "target": "spare", 00:19:38.438 "progress": { 00:19:38.438 "blocks": 5888, 00:19:38.438 "percent": 74 00:19:38.438 } 00:19:38.438 }, 00:19:38.438 "base_bdevs_list": [ 00:19:38.438 { 00:19:38.438 "name": "spare", 00:19:38.438 "uuid": "b97bdcb5-d49d-58ec-b48d-46ec53c12480", 00:19:38.438 "is_configured": true, 00:19:38.438 "data_offset": 256, 00:19:38.438 "data_size": 7936 00:19:38.438 }, 00:19:38.438 { 00:19:38.438 "name": "BaseBdev2", 00:19:38.438 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:38.438 "is_configured": true, 00:19:38.438 "data_offset": 256, 00:19:38.438 "data_size": 7936 00:19:38.438 } 00:19:38.438 ] 00:19:38.438 }' 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.438 16:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:39.007 [2024-12-06 16:35:20.660702] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:39.007 [2024-12-06 16:35:20.660884] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:39.007 [2024-12-06 16:35:20.661001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.267 16:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:39.267 16:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.267 16:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.267 16:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:39.267 16:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:39.267 16:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.267 16:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.267 16:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.267 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.267 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:39.267 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.267 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.267 "name": "raid_bdev1", 00:19:39.267 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:39.267 "strip_size_kb": 0, 00:19:39.267 "state": "online", 00:19:39.267 "raid_level": "raid1", 00:19:39.267 "superblock": true, 00:19:39.267 "num_base_bdevs": 2, 00:19:39.267 
"num_base_bdevs_discovered": 2, 00:19:39.267 "num_base_bdevs_operational": 2, 00:19:39.267 "base_bdevs_list": [ 00:19:39.267 { 00:19:39.267 "name": "spare", 00:19:39.267 "uuid": "b97bdcb5-d49d-58ec-b48d-46ec53c12480", 00:19:39.267 "is_configured": true, 00:19:39.267 "data_offset": 256, 00:19:39.267 "data_size": 7936 00:19:39.267 }, 00:19:39.267 { 00:19:39.267 "name": "BaseBdev2", 00:19:39.267 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:39.267 "is_configured": true, 00:19:39.267 "data_offset": 256, 00:19:39.267 "data_size": 7936 00:19:39.267 } 00:19:39.267 ] 00:19:39.267 }' 00:19:39.267 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.267 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:39.267 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.526 16:35:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.526 "name": "raid_bdev1", 00:19:39.526 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:39.526 "strip_size_kb": 0, 00:19:39.526 "state": "online", 00:19:39.526 "raid_level": "raid1", 00:19:39.526 "superblock": true, 00:19:39.526 "num_base_bdevs": 2, 00:19:39.526 "num_base_bdevs_discovered": 2, 00:19:39.526 "num_base_bdevs_operational": 2, 00:19:39.526 "base_bdevs_list": [ 00:19:39.526 { 00:19:39.526 "name": "spare", 00:19:39.526 "uuid": "b97bdcb5-d49d-58ec-b48d-46ec53c12480", 00:19:39.526 "is_configured": true, 00:19:39.526 "data_offset": 256, 00:19:39.526 "data_size": 7936 00:19:39.526 }, 00:19:39.526 { 00:19:39.526 "name": "BaseBdev2", 00:19:39.526 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:39.526 "is_configured": true, 00:19:39.526 "data_offset": 256, 00:19:39.526 "data_size": 7936 00:19:39.526 } 00:19:39.526 ] 00:19:39.526 }' 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:39.526 16:35:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.526 "name": 
"raid_bdev1", 00:19:39.526 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:39.526 "strip_size_kb": 0, 00:19:39.526 "state": "online", 00:19:39.526 "raid_level": "raid1", 00:19:39.526 "superblock": true, 00:19:39.526 "num_base_bdevs": 2, 00:19:39.526 "num_base_bdevs_discovered": 2, 00:19:39.526 "num_base_bdevs_operational": 2, 00:19:39.526 "base_bdevs_list": [ 00:19:39.526 { 00:19:39.526 "name": "spare", 00:19:39.526 "uuid": "b97bdcb5-d49d-58ec-b48d-46ec53c12480", 00:19:39.526 "is_configured": true, 00:19:39.526 "data_offset": 256, 00:19:39.526 "data_size": 7936 00:19:39.526 }, 00:19:39.526 { 00:19:39.526 "name": "BaseBdev2", 00:19:39.526 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:39.526 "is_configured": true, 00:19:39.526 "data_offset": 256, 00:19:39.526 "data_size": 7936 00:19:39.526 } 00:19:39.526 ] 00:19:39.526 }' 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.526 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.118 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:40.118 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.118 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.118 [2024-12-06 16:35:21.735243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.118 [2024-12-06 16:35:21.735308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.118 [2024-12-06 16:35:21.735441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.118 [2024-12-06 16:35:21.735546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.118 [2024-12-06 
16:35:21.735608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:19:40.118 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.118 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.118 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.118 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.118 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:40.118 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.118 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:40.118 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.119 16:35:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.119 [2024-12-06 16:35:21.803075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:40.119 [2024-12-06 16:35:21.803175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.119 [2024-12-06 16:35:21.803209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:40.119 [2024-12-06 16:35:21.803248] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.119 [2024-12-06 16:35:21.805201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.119 [2024-12-06 16:35:21.805283] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:40.119 [2024-12-06 16:35:21.805352] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:40.119 [2024-12-06 16:35:21.805432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:40.119 [2024-12-06 16:35:21.805566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:40.119 spare 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.119 [2024-12-06 16:35:21.905522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:19:40.119 [2024-12-06 16:35:21.905581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:40.119 [2024-12-06 16:35:21.905723] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:40.119 [2024-12-06 16:35:21.905840] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:19:40.119 [2024-12-06 16:35:21.905882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:19:40.119 [2024-12-06 16:35:21.906002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.119 16:35:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.119 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.379 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.379 "name": "raid_bdev1", 00:19:40.379 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:40.379 "strip_size_kb": 0, 00:19:40.379 "state": "online", 00:19:40.379 "raid_level": "raid1", 00:19:40.379 "superblock": true, 00:19:40.379 "num_base_bdevs": 2, 00:19:40.379 "num_base_bdevs_discovered": 2, 00:19:40.379 "num_base_bdevs_operational": 2, 00:19:40.379 "base_bdevs_list": [ 00:19:40.379 { 00:19:40.379 "name": "spare", 00:19:40.379 "uuid": "b97bdcb5-d49d-58ec-b48d-46ec53c12480", 00:19:40.379 "is_configured": true, 00:19:40.379 "data_offset": 256, 00:19:40.379 "data_size": 7936 00:19:40.379 }, 00:19:40.379 { 00:19:40.379 "name": "BaseBdev2", 00:19:40.379 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:40.379 "is_configured": true, 00:19:40.379 "data_offset": 256, 00:19:40.379 "data_size": 7936 00:19:40.379 } 00:19:40.379 ] 00:19:40.379 }' 00:19:40.379 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.379 16:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.639 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:40.639 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.639 16:35:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:40.640 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:40.640 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.640 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.640 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.640 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.640 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.640 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.640 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.640 "name": "raid_bdev1", 00:19:40.640 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:40.640 "strip_size_kb": 0, 00:19:40.640 "state": "online", 00:19:40.640 "raid_level": "raid1", 00:19:40.640 "superblock": true, 00:19:40.640 "num_base_bdevs": 2, 00:19:40.640 "num_base_bdevs_discovered": 2, 00:19:40.640 "num_base_bdevs_operational": 2, 00:19:40.640 "base_bdevs_list": [ 00:19:40.640 { 00:19:40.640 "name": "spare", 00:19:40.640 "uuid": "b97bdcb5-d49d-58ec-b48d-46ec53c12480", 00:19:40.640 "is_configured": true, 00:19:40.640 "data_offset": 256, 00:19:40.640 "data_size": 7936 00:19:40.640 }, 00:19:40.640 { 00:19:40.640 "name": "BaseBdev2", 00:19:40.640 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:40.640 "is_configured": true, 00:19:40.640 "data_offset": 256, 00:19:40.640 "data_size": 7936 00:19:40.640 } 00:19:40.640 ] 00:19:40.640 }' 00:19:40.640 16:35:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.640 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:40.640 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.899 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:40.899 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:40.899 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.899 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.899 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.899 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.899 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.899 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:40.899 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.899 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.899 [2024-12-06 16:35:22.569840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:40.899 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.899 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.899 16:35:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.900 "name": "raid_bdev1", 00:19:40.900 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:40.900 "strip_size_kb": 0, 00:19:40.900 "state": "online", 00:19:40.900 
"raid_level": "raid1", 00:19:40.900 "superblock": true, 00:19:40.900 "num_base_bdevs": 2, 00:19:40.900 "num_base_bdevs_discovered": 1, 00:19:40.900 "num_base_bdevs_operational": 1, 00:19:40.900 "base_bdevs_list": [ 00:19:40.900 { 00:19:40.900 "name": null, 00:19:40.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.900 "is_configured": false, 00:19:40.900 "data_offset": 0, 00:19:40.900 "data_size": 7936 00:19:40.900 }, 00:19:40.900 { 00:19:40.900 "name": "BaseBdev2", 00:19:40.900 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:40.900 "is_configured": true, 00:19:40.900 "data_offset": 256, 00:19:40.900 "data_size": 7936 00:19:40.900 } 00:19:40.900 ] 00:19:40.900 }' 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.900 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.159 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:41.159 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.159 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.159 [2024-12-06 16:35:22.989096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:41.160 [2024-12-06 16:35:22.989347] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:41.160 [2024-12-06 16:35:22.989408] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:41.160 [2024-12-06 16:35:22.989470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:41.160 [2024-12-06 16:35:22.993079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:41.160 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.160 16:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:41.160 [2024-12-06 16:35:22.994993] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:42.539 16:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.539 16:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.539 16:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:42.539 16:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:42.539 16:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:42.539 "name": "raid_bdev1", 00:19:42.539 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:42.539 "strip_size_kb": 0, 00:19:42.539 "state": "online", 00:19:42.539 "raid_level": "raid1", 00:19:42.539 "superblock": true, 00:19:42.539 "num_base_bdevs": 2, 00:19:42.539 "num_base_bdevs_discovered": 2, 00:19:42.539 "num_base_bdevs_operational": 2, 00:19:42.539 "process": { 00:19:42.539 "type": "rebuild", 00:19:42.539 "target": "spare", 00:19:42.539 "progress": { 00:19:42.539 "blocks": 2560, 00:19:42.539 "percent": 32 00:19:42.539 } 00:19:42.539 }, 00:19:42.539 "base_bdevs_list": [ 00:19:42.539 { 00:19:42.539 "name": "spare", 00:19:42.539 "uuid": "b97bdcb5-d49d-58ec-b48d-46ec53c12480", 00:19:42.539 "is_configured": true, 00:19:42.539 "data_offset": 256, 00:19:42.539 "data_size": 7936 00:19:42.539 }, 00:19:42.539 { 00:19:42.539 "name": "BaseBdev2", 00:19:42.539 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:42.539 "is_configured": true, 00:19:42.539 "data_offset": 256, 00:19:42.539 "data_size": 7936 00:19:42.539 } 00:19:42.539 ] 00:19:42.539 }' 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.539 [2024-12-06 16:35:24.143582] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:42.539 [2024-12-06 16:35:24.199154] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:42.539 [2024-12-06 16:35:24.199266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.539 [2024-12-06 16:35:24.199304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:42.539 [2024-12-06 16:35:24.199325] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.539 16:35:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.539 "name": "raid_bdev1", 00:19:42.539 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:42.539 "strip_size_kb": 0, 00:19:42.539 "state": "online", 00:19:42.539 "raid_level": "raid1", 00:19:42.539 "superblock": true, 00:19:42.539 "num_base_bdevs": 2, 00:19:42.539 "num_base_bdevs_discovered": 1, 00:19:42.539 "num_base_bdevs_operational": 1, 00:19:42.539 "base_bdevs_list": [ 00:19:42.539 { 00:19:42.539 "name": null, 00:19:42.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.539 "is_configured": false, 00:19:42.539 "data_offset": 0, 00:19:42.539 "data_size": 7936 00:19:42.539 }, 00:19:42.539 { 00:19:42.539 "name": "BaseBdev2", 00:19:42.539 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:42.539 "is_configured": true, 00:19:42.539 "data_offset": 256, 00:19:42.539 "data_size": 7936 00:19:42.539 } 00:19:42.539 ] 00:19:42.539 }' 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.539 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.798 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:42.798 16:35:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.798 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.798 [2024-12-06 16:35:24.634537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:42.798 [2024-12-06 16:35:24.634648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.798 [2024-12-06 16:35:24.634695] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:42.798 [2024-12-06 16:35:24.634723] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.798 [2024-12-06 16:35:24.634966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.798 [2024-12-06 16:35:24.635019] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:42.798 [2024-12-06 16:35:24.635117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:42.798 [2024-12-06 16:35:24.635157] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:42.798 [2024-12-06 16:35:24.635227] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:42.798 [2024-12-06 16:35:24.635287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:43.056 [2024-12-06 16:35:24.638993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:43.056 spare 00:19:43.056 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.056 16:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:43.056 [2024-12-06 16:35:24.641074] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:43.989 "name": "raid_bdev1", 00:19:43.989 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:43.989 "strip_size_kb": 0, 00:19:43.989 "state": "online", 00:19:43.989 "raid_level": "raid1", 00:19:43.989 "superblock": true, 00:19:43.989 "num_base_bdevs": 2, 00:19:43.989 "num_base_bdevs_discovered": 2, 00:19:43.989 "num_base_bdevs_operational": 2, 00:19:43.989 "process": { 00:19:43.989 "type": "rebuild", 00:19:43.989 "target": "spare", 00:19:43.989 "progress": { 00:19:43.989 "blocks": 2560, 00:19:43.989 "percent": 32 00:19:43.989 } 00:19:43.989 }, 00:19:43.989 "base_bdevs_list": [ 00:19:43.989 { 00:19:43.989 "name": "spare", 00:19:43.989 "uuid": "b97bdcb5-d49d-58ec-b48d-46ec53c12480", 00:19:43.989 "is_configured": true, 00:19:43.989 "data_offset": 256, 00:19:43.989 "data_size": 7936 00:19:43.989 }, 00:19:43.989 { 00:19:43.989 "name": "BaseBdev2", 00:19:43.989 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:43.989 "is_configured": true, 00:19:43.989 "data_offset": 256, 00:19:43.989 "data_size": 7936 00:19:43.989 } 00:19:43.989 ] 00:19:43.989 }' 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.989 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.989 [2024-12-06 
16:35:25.789569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:44.247 [2024-12-06 16:35:25.845408] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:44.247 [2024-12-06 16:35:25.845557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.247 [2024-12-06 16:35:25.845598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:44.247 [2024-12-06 16:35:25.845625] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.247 16:35:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.247 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.247 "name": "raid_bdev1", 00:19:44.248 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:44.248 "strip_size_kb": 0, 00:19:44.248 "state": "online", 00:19:44.248 "raid_level": "raid1", 00:19:44.248 "superblock": true, 00:19:44.248 "num_base_bdevs": 2, 00:19:44.248 "num_base_bdevs_discovered": 1, 00:19:44.248 "num_base_bdevs_operational": 1, 00:19:44.248 "base_bdevs_list": [ 00:19:44.248 { 00:19:44.248 "name": null, 00:19:44.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.248 "is_configured": false, 00:19:44.248 "data_offset": 0, 00:19:44.248 "data_size": 7936 00:19:44.248 }, 00:19:44.248 { 00:19:44.248 "name": "BaseBdev2", 00:19:44.248 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:44.248 "is_configured": true, 00:19:44.248 "data_offset": 256, 00:19:44.248 "data_size": 7936 00:19:44.248 } 00:19:44.248 ] 00:19:44.248 }' 00:19:44.248 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.248 16:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.505 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:44.505 16:35:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.505 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:44.505 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:44.505 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.505 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.505 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.505 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.505 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.505 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.763 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.763 "name": "raid_bdev1", 00:19:44.763 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:44.763 "strip_size_kb": 0, 00:19:44.763 "state": "online", 00:19:44.763 "raid_level": "raid1", 00:19:44.763 "superblock": true, 00:19:44.763 "num_base_bdevs": 2, 00:19:44.763 "num_base_bdevs_discovered": 1, 00:19:44.763 "num_base_bdevs_operational": 1, 00:19:44.763 "base_bdevs_list": [ 00:19:44.763 { 00:19:44.763 "name": null, 00:19:44.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.763 "is_configured": false, 00:19:44.763 "data_offset": 0, 00:19:44.763 "data_size": 7936 00:19:44.763 }, 00:19:44.763 { 00:19:44.763 "name": "BaseBdev2", 00:19:44.763 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:44.763 "is_configured": true, 00:19:44.763 "data_offset": 256, 
00:19:44.763 "data_size": 7936 00:19:44.763 } 00:19:44.763 ] 00:19:44.763 }' 00:19:44.763 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.763 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:44.763 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.763 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:44.763 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:44.763 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.764 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.764 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.764 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:44.764 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.764 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.764 [2024-12-06 16:35:26.448801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:44.764 [2024-12-06 16:35:26.448878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.764 [2024-12-06 16:35:26.448902] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:44.764 [2024-12-06 16:35:26.448913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.764 [2024-12-06 16:35:26.449086] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.764 [2024-12-06 16:35:26.449103] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:44.764 [2024-12-06 16:35:26.449155] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:44.764 [2024-12-06 16:35:26.449170] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:44.764 [2024-12-06 16:35:26.449178] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:44.764 [2024-12-06 16:35:26.449194] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:44.764 BaseBdev1 00:19:44.764 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.764 16:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:45.696 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:45.696 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.696 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.696 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.696 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.696 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:45.696 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.696 16:35:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.696 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.696 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.696 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.697 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.697 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.697 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.697 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.697 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.697 "name": "raid_bdev1", 00:19:45.697 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:45.697 "strip_size_kb": 0, 00:19:45.697 "state": "online", 00:19:45.697 "raid_level": "raid1", 00:19:45.697 "superblock": true, 00:19:45.697 "num_base_bdevs": 2, 00:19:45.697 "num_base_bdevs_discovered": 1, 00:19:45.697 "num_base_bdevs_operational": 1, 00:19:45.697 "base_bdevs_list": [ 00:19:45.697 { 00:19:45.697 "name": null, 00:19:45.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.697 "is_configured": false, 00:19:45.697 "data_offset": 0, 00:19:45.697 "data_size": 7936 00:19:45.697 }, 00:19:45.697 { 00:19:45.697 "name": "BaseBdev2", 00:19:45.697 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:45.697 "is_configured": true, 00:19:45.697 "data_offset": 256, 00:19:45.697 "data_size": 7936 00:19:45.697 } 00:19:45.697 ] 00:19:45.697 }' 00:19:45.697 16:35:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.697 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.261 "name": "raid_bdev1", 00:19:46.261 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:46.261 "strip_size_kb": 0, 00:19:46.261 "state": "online", 00:19:46.261 "raid_level": "raid1", 00:19:46.261 "superblock": true, 00:19:46.261 "num_base_bdevs": 2, 00:19:46.261 "num_base_bdevs_discovered": 1, 00:19:46.261 "num_base_bdevs_operational": 1, 00:19:46.261 "base_bdevs_list": [ 00:19:46.261 { 00:19:46.261 "name": 
null, 00:19:46.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.261 "is_configured": false, 00:19:46.261 "data_offset": 0, 00:19:46.261 "data_size": 7936 00:19:46.261 }, 00:19:46.261 { 00:19:46.261 "name": "BaseBdev2", 00:19:46.261 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:46.261 "is_configured": true, 00:19:46.261 "data_offset": 256, 00:19:46.261 "data_size": 7936 00:19:46.261 } 00:19:46.261 ] 00:19:46.261 }' 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:46.261 16:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.261 [2024-12-06 16:35:28.022250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.261 [2024-12-06 16:35:28.022487] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:46.261 [2024-12-06 16:35:28.022564] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:46.261 request: 00:19:46.261 { 00:19:46.261 "base_bdev": "BaseBdev1", 00:19:46.261 "raid_bdev": "raid_bdev1", 00:19:46.261 "method": "bdev_raid_add_base_bdev", 00:19:46.261 "req_id": 1 00:19:46.261 } 00:19:46.261 Got JSON-RPC error response 00:19:46.261 response: 00:19:46.261 { 00:19:46.261 "code": -22, 00:19:46.261 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:46.261 } 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:46.261 16:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:47.199 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:47.199 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.199 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.199 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.199 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.199 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:47.199 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.199 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.199 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.199 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.459 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.459 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.459 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.459 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.459 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.459 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.459 "name": "raid_bdev1", 00:19:47.459 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:47.459 "strip_size_kb": 0, 
00:19:47.459 "state": "online", 00:19:47.459 "raid_level": "raid1", 00:19:47.459 "superblock": true, 00:19:47.459 "num_base_bdevs": 2, 00:19:47.459 "num_base_bdevs_discovered": 1, 00:19:47.459 "num_base_bdevs_operational": 1, 00:19:47.459 "base_bdevs_list": [ 00:19:47.459 { 00:19:47.459 "name": null, 00:19:47.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.459 "is_configured": false, 00:19:47.459 "data_offset": 0, 00:19:47.459 "data_size": 7936 00:19:47.459 }, 00:19:47.459 { 00:19:47.459 "name": "BaseBdev2", 00:19:47.459 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:47.459 "is_configured": true, 00:19:47.459 "data_offset": 256, 00:19:47.459 "data_size": 7936 00:19:47.459 } 00:19:47.459 ] 00:19:47.459 }' 00:19:47.459 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.459 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.719 
16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.719 "name": "raid_bdev1", 00:19:47.719 "uuid": "2f89524d-7544-4573-81b9-04fc705b5eab", 00:19:47.719 "strip_size_kb": 0, 00:19:47.719 "state": "online", 00:19:47.719 "raid_level": "raid1", 00:19:47.719 "superblock": true, 00:19:47.719 "num_base_bdevs": 2, 00:19:47.719 "num_base_bdevs_discovered": 1, 00:19:47.719 "num_base_bdevs_operational": 1, 00:19:47.719 "base_bdevs_list": [ 00:19:47.719 { 00:19:47.719 "name": null, 00:19:47.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.719 "is_configured": false, 00:19:47.719 "data_offset": 0, 00:19:47.719 "data_size": 7936 00:19:47.719 }, 00:19:47.719 { 00:19:47.719 "name": "BaseBdev2", 00:19:47.719 "uuid": "0461aedf-86c7-5caf-b88b-d8187bfe2e36", 00:19:47.719 "is_configured": true, 00:19:47.719 "data_offset": 256, 00:19:47.719 "data_size": 7936 00:19:47.719 } 00:19:47.719 ] 00:19:47.719 }' 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.719 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.980 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.980 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99799 00:19:47.980 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 99799 ']' 00:19:47.980 16:35:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 99799 00:19:47.980 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:47.980 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.980 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99799 00:19:47.980 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.980 killing process with pid 99799 00:19:47.980 Received shutdown signal, test time was about 60.000000 seconds 00:19:47.980 00:19:47.980 Latency(us) 00:19:47.980 [2024-12-06T16:35:29.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.980 [2024-12-06T16:35:29.819Z] =================================================================================================================== 00:19:47.980 [2024-12-06T16:35:29.819Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:47.980 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.980 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99799' 00:19:47.980 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 99799 00:19:47.980 [2024-12-06 16:35:29.615058] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:47.980 [2024-12-06 16:35:29.615176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.980 [2024-12-06 16:35:29.615239] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:47.980 [2024-12-06 16:35:29.615249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:19:47.980 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 99799 00:19:47.980 [2024-12-06 16:35:29.647527] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:48.240 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:48.240 00:19:48.240 real 0m16.115s 00:19:48.240 user 0m21.564s 00:19:48.240 sys 0m1.626s 00:19:48.240 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.240 ************************************ 00:19:48.240 END TEST raid_rebuild_test_sb_md_interleaved 00:19:48.240 ************************************ 00:19:48.240 16:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.240 16:35:29 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:48.240 16:35:29 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:48.240 16:35:29 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99799 ']' 00:19:48.240 16:35:29 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99799 00:19:48.240 16:35:29 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:48.240 00:19:48.240 real 10m9.800s 00:19:48.240 user 14m29.995s 00:19:48.240 sys 1m50.303s 00:19:48.240 16:35:29 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.240 16:35:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:48.240 ************************************ 00:19:48.240 END TEST bdev_raid 00:19:48.240 ************************************ 00:19:48.240 16:35:29 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:48.240 16:35:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:48.240 16:35:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.240 16:35:29 -- common/autotest_common.sh@10 -- # set +x 00:19:48.240 
************************************ 00:19:48.240 START TEST spdkcli_raid 00:19:48.240 ************************************ 00:19:48.240 16:35:29 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:48.240 * Looking for test storage... 00:19:48.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:48.500 16:35:30 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:48.500 16:35:30 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:19:48.500 16:35:30 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:48.500 16:35:30 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.500 16:35:30 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:48.500 16:35:30 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.500 16:35:30 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:48.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.500 --rc genhtml_branch_coverage=1 00:19:48.500 --rc genhtml_function_coverage=1 00:19:48.500 --rc genhtml_legend=1 00:19:48.500 --rc geninfo_all_blocks=1 00:19:48.500 --rc geninfo_unexecuted_blocks=1 00:19:48.500 00:19:48.500 ' 00:19:48.500 16:35:30 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:48.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.501 --rc genhtml_branch_coverage=1 00:19:48.501 --rc genhtml_function_coverage=1 00:19:48.501 --rc genhtml_legend=1 00:19:48.501 --rc geninfo_all_blocks=1 00:19:48.501 --rc geninfo_unexecuted_blocks=1 00:19:48.501 00:19:48.501 ' 00:19:48.501 
16:35:30 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:48.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.501 --rc genhtml_branch_coverage=1 00:19:48.501 --rc genhtml_function_coverage=1 00:19:48.501 --rc genhtml_legend=1 00:19:48.501 --rc geninfo_all_blocks=1 00:19:48.501 --rc geninfo_unexecuted_blocks=1 00:19:48.501 00:19:48.501 ' 00:19:48.501 16:35:30 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:48.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.501 --rc genhtml_branch_coverage=1 00:19:48.501 --rc genhtml_function_coverage=1 00:19:48.501 --rc genhtml_legend=1 00:19:48.501 --rc geninfo_all_blocks=1 00:19:48.501 --rc geninfo_unexecuted_blocks=1 00:19:48.501 00:19:48.501 ' 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:48.501 16:35:30 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:48.501 16:35:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.501 16:35:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100464 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:48.501 16:35:30 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100464 00:19:48.501 16:35:30 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 100464 ']' 00:19:48.501 16:35:30 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.501 16:35:30 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.501 16:35:30 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.501 16:35:30 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.501 16:35:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:48.501 [2024-12-06 16:35:30.322055] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
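`waitforlisten` in autotest_common.sh is the step being traced here: retry (the trace shows `max_retries=100`) until the freshly started `spdk_tgt` accepts connections on the UNIX-domain RPC socket `/var/tmp/spdk.sock`. The same polling loop, sketched in Python (function and parameter names are mine, not SPDK's; the real helper also checks that the target pid is still alive between attempts):

```python
import socket
import time

def wait_for_listen(sock_path, max_retries=100, delay=0.1):
    """Poll until a process accepts connections on a UNIX-domain socket."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True          # the target is up and listening
        except OSError:
            time.sleep(delay)    # not ready yet (or socket absent); retry
        finally:
            s.close()
    return False
```

A caller would treat `False` as the startup failure that makes the trap at `raid.sh@17` fire `cleanup`.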
00:19:48.501 [2024-12-06 16:35:30.322177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100464 ] 00:19:48.762 [2024-12-06 16:35:30.476386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:48.762 [2024-12-06 16:35:30.503135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.762 [2024-12-06 16:35:30.503293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.344 16:35:31 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.344 16:35:31 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:49.344 16:35:31 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:49.344 16:35:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.344 16:35:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:49.614 16:35:31 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:49.614 16:35:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.614 16:35:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:49.614 16:35:31 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:49.614 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:49.614 ' 00:19:51.010 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:51.010 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:51.268 16:35:32 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:51.268 16:35:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.268 16:35:32 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:51.268 16:35:32 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:51.268 16:35:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.268 16:35:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:51.268 16:35:32 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:51.268 ' 00:19:52.205 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:52.464 16:35:34 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:52.464 16:35:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:52.464 16:35:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:52.464 16:35:34 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:52.464 16:35:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.464 16:35:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:52.464 16:35:34 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:52.464 16:35:34 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:53.031 16:35:34 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:53.031 16:35:34 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:53.031 16:35:34 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:53.031 16:35:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.031 16:35:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:53.031 16:35:34 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:53.031 16:35:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.031 16:35:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:53.031 16:35:34 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:53.031 ' 00:19:53.969 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:53.969 16:35:35 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:53.969 16:35:35 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.969 16:35:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.229 16:35:35 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:54.229 16:35:35 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.229 16:35:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.229 16:35:35 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:54.229 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:54.229 ' 00:19:55.624 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:55.624 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:55.624 16:35:37 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:55.624 16:35:37 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.624 16:35:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:55.624 16:35:37 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100464 00:19:55.624 16:35:37 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 100464 ']' 00:19:55.624 16:35:37 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 100464 00:19:55.624 16:35:37 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:55.624 16:35:37 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.624 16:35:37 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100464 00:19:55.624 killing process with pid 100464 00:19:55.624 16:35:37 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:55.624 16:35:37 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:55.624 16:35:37 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100464' 00:19:55.624 16:35:37 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 100464 00:19:55.624 16:35:37 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 100464 00:19:55.885 Process with pid 100464 is not found 00:19:55.885 16:35:37 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:55.885 16:35:37 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100464 ']' 00:19:55.885 16:35:37 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100464 00:19:55.885 16:35:37 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 100464 ']' 00:19:55.885 16:35:37 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 100464 00:19:55.885 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (100464) - No such process 00:19:55.885 16:35:37 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 100464 is not found' 00:19:55.885 16:35:37 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:55.885 16:35:37 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:55.885 16:35:37 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:55.885 16:35:37 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:55.885 ************************************ 00:19:55.885 END TEST 
spdkcli_raid 00:19:55.885 ************************************ 00:19:55.885 00:19:55.885 real 0m7.701s 00:19:55.885 user 0m16.447s 00:19:55.885 sys 0m0.976s 00:19:55.885 16:35:37 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.885 16:35:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:56.145 16:35:37 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:56.145 16:35:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:56.145 16:35:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.145 16:35:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.145 ************************************ 00:19:56.145 START TEST blockdev_raid5f 00:19:56.145 ************************************ 00:19:56.145 16:35:37 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:56.145 * Looking for test storage... 00:19:56.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:56.145 16:35:37 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:56.145 16:35:37 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:19:56.145 16:35:37 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:56.145 16:35:37 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:56.145 16:35:37 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@337 -- 
# read -ra ver2 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:56.146 16:35:37 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:56.146 16:35:37 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.146 16:35:37 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:19:56.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.146 --rc genhtml_branch_coverage=1 00:19:56.146 --rc genhtml_function_coverage=1 00:19:56.146 --rc genhtml_legend=1 00:19:56.146 --rc geninfo_all_blocks=1 00:19:56.146 --rc geninfo_unexecuted_blocks=1 00:19:56.146 00:19:56.146 ' 00:19:56.146 16:35:37 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:56.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.146 --rc genhtml_branch_coverage=1 00:19:56.146 --rc genhtml_function_coverage=1 00:19:56.146 --rc genhtml_legend=1 00:19:56.146 --rc geninfo_all_blocks=1 00:19:56.146 --rc geninfo_unexecuted_blocks=1 00:19:56.146 00:19:56.146 ' 00:19:56.146 16:35:37 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:56.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.146 --rc genhtml_branch_coverage=1 00:19:56.146 --rc genhtml_function_coverage=1 00:19:56.146 --rc genhtml_legend=1 00:19:56.146 --rc geninfo_all_blocks=1 00:19:56.146 --rc geninfo_unexecuted_blocks=1 00:19:56.146 00:19:56.146 ' 00:19:56.146 16:35:37 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:56.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.146 --rc genhtml_branch_coverage=1 00:19:56.146 --rc genhtml_function_coverage=1 00:19:56.146 --rc genhtml_legend=1 00:19:56.146 --rc geninfo_all_blocks=1 00:19:56.146 --rc geninfo_unexecuted_blocks=1 00:19:56.146 00:19:56.146 ' 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100722 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:56.146 16:35:37 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100722 00:19:56.146 16:35:37 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 100722 ']' 00:19:56.146 16:35:37 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.146 16:35:37 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.146 16:35:37 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.146 16:35:37 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.146 16:35:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:56.406 [2024-12-06 16:35:38.064384] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:19:56.406 [2024-12-06 16:35:38.064618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100722 ] 00:19:56.406 [2024-12-06 16:35:38.228655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.665 [2024-12-06 16:35:38.254476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:57.234 16:35:38 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:57.234 16:35:38 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:19:57.234 16:35:38 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:57.234 Malloc0 00:19:57.234 Malloc1 00:19:57.234 Malloc2 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.234 16:35:38 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.234 16:35:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:19:57.234 16:35:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:57.234 
16:35:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.234 16:35:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.234 16:35:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:57.234 16:35:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.234 16:35:38 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:57.234 16:35:38 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:19:57.234 16:35:38 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:57.234 16:35:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.234 16:35:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:57.234 16:35:39 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.234 16:35:39 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:57.234 16:35:39 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:57.234 16:35:39 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fbe08de6-1ce0-4113-9f63-b2a9f9b8792a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fbe08de6-1ce0-4113-9f63-b2a9f9b8792a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fbe08de6-1ce0-4113-9f63-b2a9f9b8792a",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "225e4cab-30fd-4557-a78c-9267caf043a7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7d4dbc90-88af-4a59-9348-529a1bc4d1eb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "2b81db53-7a63-4973-85d1-aa9ba59ca02c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:57.494 16:35:39 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:57.494 16:35:39 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:19:57.494 16:35:39 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:57.494 16:35:39 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 100722 00:19:57.494 16:35:39 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 100722 ']' 00:19:57.494 16:35:39 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 100722 00:19:57.494 16:35:39 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:57.494 16:35:39 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
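The long single-quoted blob printed above is the `bdev_get_bdevs` record for the raid5f volume, emitted line by line for the match file. Once rejoined it is ordinary JSON; a sketch of pulling the base-bdev names out of a record shaped like the one in the log (the record below is abridged from the output above, and the helper name is mine):

```python
import json

def base_bdev_names(bdev_record):
    """Return the base bdev names of a raid bdev from a
    bdev_get_bdevs-style JSON record (shape as dumped in the log)."""
    raid = bdev_record["driver_specific"]["raid"]
    return [b["name"] for b in raid["base_bdevs_list"]]

# Abridged from the raid5f record traced above.
record = json.loads("""
{
  "name": "raid5f",
  "product_name": "Raid Volume",
  "driver_specific": {
    "raid": {
      "raid_level": "raid5f",
      "num_base_bdevs": 3,
      "base_bdevs_list": [
        {"name": "Malloc0", "is_configured": true},
        {"name": "Malloc1", "is_configured": true},
        {"name": "Malloc2", "is_configured": true}
      ]
    }
  }
}
""")
```

Here `base_bdev_names(record)` yields `["Malloc0", "Malloc1", "Malloc2"]`, the three malloc bdevs the setup step created.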
00:19:57.494 16:35:39 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100722 00:19:57.494 16:35:39 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.494 killing process with pid 100722 00:19:57.494 16:35:39 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.495 16:35:39 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100722' 00:19:57.495 16:35:39 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 100722 00:19:57.495 16:35:39 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 100722 00:19:57.754 16:35:39 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:57.754 16:35:39 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:57.754 16:35:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:57.754 16:35:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.754 16:35:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:57.754 ************************************ 00:19:57.754 START TEST bdev_hello_world 00:19:57.754 ************************************ 00:19:57.754 16:35:39 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:58.014 [2024-12-06 16:35:39.613774] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:19:58.014 [2024-12-06 16:35:39.613916] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100756 ]
00:19:58.014 [2024-12-06 16:35:39.785515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:58.014 [2024-12-06 16:35:39.810518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:58.279 [2024-12-06 16:35:39.988632] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:19:58.279 [2024-12-06 16:35:39.988683] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f
00:19:58.279 [2024-12-06 16:35:39.988711] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:19:58.279 [2024-12-06 16:35:39.989036] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:19:58.279 [2024-12-06 16:35:39.989176] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:19:58.279 [2024-12-06 16:35:39.989196] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:19:58.279 [2024-12-06 16:35:39.989262] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:19:58.279
00:19:58.279 [2024-12-06 16:35:39.989280] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:19:58.545
00:19:58.545 real 0m0.685s
00:19:58.545 user 0m0.370s
00:19:58.545 sys 0m0.209s
00:19:58.545 ************************************
00:19:58.545 END TEST bdev_hello_world
00:19:58.545 ************************************
00:19:58.545 16:35:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:58.545 16:35:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:19:58.545 16:35:40 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:19:58.545 16:35:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:58.545 16:35:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:58.545 16:35:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:58.545 ************************************
00:19:58.545 START TEST bdev_bounds
00:19:58.545 ************************************
00:19:58.545 16:35:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:19:58.545 Process bdevio pid: 100787
16:35:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100787
16:35:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
16:35:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100787'
16:35:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100787
16:35:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 100787 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:35:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
16:35:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
16:35:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
16:35:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
16:35:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
16:35:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 16:35:40.349033] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization...
[2024-12-06 16:35:40.349163] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100787 ]
[2024-12-06 16:35:40.501868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-12-06 16:35:40.530052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-06 16:35:40.530137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-06 16:35:40.530280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:59.375 16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:59.375 16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:19:59.375 16:35:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:19:59.635 I/O targets:
00:19:59.635 raid5f: 131072 blocks of 512 bytes (64 MiB)
00:19:59.635
00:19:59.635
00:19:59.635 CUnit - A unit testing framework for C - Version 2.1-3
00:19:59.635 http://cunit.sourceforge.net/
00:19:59.635
00:19:59.635
00:19:59.635 Suite: bdevio tests on: raid5f
00:19:59.635 Test: blockdev write read block ...passed
00:19:59.635 Test: blockdev write zeroes read block ...passed
00:19:59.635 Test: blockdev write zeroes read no split ...passed
00:19:59.635 Test: blockdev write zeroes read split ...passed
00:19:59.635 Test: blockdev write zeroes read split partial ...passed
00:19:59.635 Test: blockdev reset ...passed
00:19:59.635 Test: blockdev write read 8 blocks ...passed
00:19:59.635 Test: blockdev write read size > 128k ...passed
00:19:59.635 Test: blockdev write read invalid size ...passed
00:19:59.635 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:59.635 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:59.635 Test: blockdev write read max offset ...passed
00:19:59.635 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:59.635 Test: blockdev writev readv 8 blocks ...passed
00:19:59.635 Test: blockdev writev readv 30 x 1block ...passed
00:19:59.635 Test: blockdev writev readv block ...passed
00:19:59.635 Test: blockdev writev readv size > 128k ...passed
00:19:59.635 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:59.635 Test: blockdev comparev and writev ...passed
00:19:59.635 Test: blockdev nvme passthru rw ...passed
00:19:59.635 Test: blockdev nvme passthru vendor specific ...passed
00:19:59.635 Test: blockdev nvme admin passthru ...passed
00:19:59.635 Test: blockdev copy ...passed
00:19:59.635
00:19:59.635 Run Summary: Type Total Ran Passed Failed Inactive
00:19:59.635 suites 1 1 n/a 0 0
00:19:59.635 tests 23 23 23 0 0
00:19:59.635 asserts 130 130 130 0 n/a
00:19:59.635
00:19:59.635 Elapsed time = 0.322 seconds
00:19:59.635 0
00:19:59.635 16:35:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100787
00:19:59.635 16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 100787 ']'
00:19:59.635 16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 100787
00:19:59.635 16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:19:59.635 16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:59.894 16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100787
00:19:59.894 16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:59.894 16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:59.894 16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100787'
killing process with pid 100787
16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 100787
16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 100787
00:20:00.153 16:35:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:20:00.153
00:20:00.153 real 0m1.471s
00:20:00.153 user 0m3.669s
00:20:00.153 sys 0m0.311s
00:20:00.153 16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:00.153 16:35:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:20:00.153 ************************************
00:20:00.153 END TEST bdev_bounds
00:20:00.153 ************************************
00:20:00.153 16:35:41 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f ''
00:20:00.153 16:35:41 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:20:00.153 16:35:41 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:00.153 16:35:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:00.153 ************************************
00:20:00.153 START TEST bdev_nbd
00:20:00.153 ************************************
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f ''
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f')
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0')
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f')
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100830
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100830 /var/tmp/spdk-nbd.sock
00:20:00.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 100830 ']'
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:00.153 16:35:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:20:00.153 [2024-12-06 16:35:41.896649] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization...
00:20:00.153 [2024-12-06 16:35:41.896803] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:00.412 [2024-12-06 16:35:42.082226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:00.412 [2024-12-06 16:35:42.107423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f')
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f')
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:20:00.979 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:01.239 1+0 records in
00:20:01.239 1+0 records out
00:20:01.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004404 s, 9.3 MB/s
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:20:01.239 16:35:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:20:01.498 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:20:01.498 {
00:20:01.498 "nbd_device": "/dev/nbd0",
00:20:01.498 "bdev_name": "raid5f"
00:20:01.498 }
00:20:01.498 ]'
00:20:01.498 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:20:01.498 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:20:01.498 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:20:01.498 {
00:20:01.498 "nbd_device": "/dev/nbd0",
00:20:01.498 "bdev_name": "raid5f"
00:20:01.498 }
00:20:01.498 ]'
00:20:01.498 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:20:01.498 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:01.498 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:20:01.498 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:01.498 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:20:01.498 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:01.498 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:20:01.810 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:01.810 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:01.810 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:01.810 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:01.810 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:01.810 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:01.810 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:20:01.810 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:20:01.810 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:20:01.810 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:01.810 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f')
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f')
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:02.069 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0
00:20:02.329 /dev/nbd0
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:02.329 1+0 records in
00:20:02.329 1+0 records out
00:20:02.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236748 s, 17.3 MB/s
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:02.329 16:35:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:20:02.588 {
00:20:02.588 "nbd_device": "/dev/nbd0",
00:20:02.588 "bdev_name": "raid5f"
00:20:02.588 }
00:20:02.588 ]'
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:20:02.588 {
00:20:02.588 "nbd_device": "/dev/nbd0",
00:20:02.588 "bdev_name": "raid5f"
00:20:02.588 }
00:20:02.588 ]'
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:20:02.588 256+0 records in
00:20:02.588 256+0 records out
00:20:02.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155516 s, 67.4 MB/s
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:20:02.588 256+0 records in
00:20:02.588 256+0 records out
00:20:02.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295937 s, 35.4 MB/s
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:02.588 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:20:02.847 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:02.847 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:02.847 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:02.847 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:02.847 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:02.847 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:02.847 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:20:02.847 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:20:02.847 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:20:02.847 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:02.847 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:20:03.107 16:35:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:20:03.366 malloc_lvol_verify
00:20:03.366 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:20:03.631 f4f5ec3a-945d-457e-83b8-2865ef2a32f1
00:20:03.631 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:20:03.631 6e0ddb79-24ff-480b-879d-fe6e804b8acc
00:20:03.631 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:20:03.890 /dev/nbd0
00:20:03.890 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:20:03.890 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:20:03.890 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:20:03.890 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:20:03.890 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:20:03.890 Discarding device blocks: 0/4096mke2fs 1.47.0 (5-Feb-2023)
00:20:03.890  done
00:20:03.890 Creating filesystem with 4096 1k blocks and 1024 inodes
00:20:03.890
00:20:03.890 Allocating group tables: 0/1 done
00:20:03.890 Writing inode tables: 0/1 done
00:20:03.890 Creating journal (1024 blocks): done
00:20:03.890 Writing superblocks and filesystem accounting information: 0/1 done
00:20:03.890
00:20:03.890 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:20:03.890 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:20:03.890 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:20:03.890 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:03.890 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:20:03.890 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:03.890 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100830
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 100830 ']'
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 100830
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100830
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:04.149 16:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 100830
16:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100830'
16:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 100830
16:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 100830
00:20:04.409 16:35:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:20:04.409
00:20:04.409 real 0m4.410s
00:20:04.409 user 0m6.449s
00:20:04.409 sys 0m1.270s
00:20:04.409 16:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:04.409 16:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:20:04.409 ************************************
00:20:04.409 END TEST bdev_nbd
00:20:04.409 ************************************
00:20:04.670 16:35:46 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:20:04.670 16:35:46 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']'
00:20:04.670 16:35:46 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']'
00:20:04.670 16:35:46 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite ''
00:20:04.670 16:35:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:04.670 16:35:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:04.670 16:35:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:04.670 ************************************
00:20:04.670 START TEST bdev_fio
00:20:04.670 ************************************
00:20:04.670 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:20:04.670 16:35:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:20:04.670 16:35:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:20:04.670 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:20:04.670 16:35:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:20:04.670 16:35:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:20:04.670 16:35:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:04.671 ************************************ 00:20:04.671 START TEST bdev_fio_rw_verify 00:20:04.671 ************************************ 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:04.671 16:35:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:04.932 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:04.932 fio-3.35 00:20:04.932 Starting 1 thread 00:20:17.179 00:20:17.179 job_raid5f: (groupid=0, jobs=1): err= 0: pid=101029: Fri Dec 6 16:35:57 2024 00:20:17.179 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(432MiB/10001msec) 00:20:17.179 slat (nsec): min=18858, max=67655, avg=21975.01, stdev=2698.13 00:20:17.179 clat (usec): min=10, max=345, avg=145.01, stdev=53.36 00:20:17.179 lat (usec): min=31, max=377, avg=166.98, stdev=53.96 00:20:17.179 clat percentiles (usec): 00:20:17.179 | 50.000th=[ 147], 99.000th=[ 258], 99.900th=[ 293], 99.990th=[ 322], 00:20:17.179 | 99.999th=[ 347] 00:20:17.179 write: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(448MiB/9880msec); 0 zone resets 00:20:17.179 slat (usec): min=8, max=257, avg=18.32, stdev= 3.96 00:20:17.179 clat (usec): min=60, max=1796, avg=328.60, stdev=52.70 00:20:17.179 lat (usec): min=76, max=2054, avg=346.92, stdev=54.33 00:20:17.179 clat percentiles (usec): 00:20:17.179 | 50.000th=[ 330], 99.000th=[ 449], 99.900th=[ 644], 99.990th=[ 1418], 00:20:17.179 | 99.999th=[ 1680] 00:20:17.179 bw ( KiB/s): min=43024, max=48888, per=98.72%, avg=45799.74, stdev=1667.49, samples=19 00:20:17.179 iops : min=10756, max=12222, avg=11449.68, stdev=416.55, samples=19 00:20:17.179 lat (usec) : 20=0.01%, 50=0.01%, 
100=11.87%, 250=39.26%, 500=48.76% 00:20:17.179 lat (usec) : 750=0.07%, 1000=0.02% 00:20:17.179 lat (msec) : 2=0.02% 00:20:17.179 cpu : usr=99.07%, sys=0.34%, ctx=21, majf=0, minf=12291 00:20:17.179 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.179 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.179 issued rwts: total=110604,114593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.179 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:17.179 00:20:17.179 Run status group 0 (all jobs): 00:20:17.179 READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=432MiB (453MB), run=10001-10001msec 00:20:17.179 WRITE: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=448MiB (469MB), run=9880-9880msec 00:20:17.179 ----------------------------------------------------- 00:20:17.179 Suppressions used: 00:20:17.179 count bytes template 00:20:17.179 1 7 /usr/src/fio/parse.c 00:20:17.179 552 52992 /usr/src/fio/iolog.c 00:20:17.179 1 8 libtcmalloc_minimal.so 00:20:17.179 1 904 libcrypto.so 00:20:17.179 ----------------------------------------------------- 00:20:17.179 00:20:17.179 00:20:17.179 real 0m11.223s 00:20:17.179 user 0m11.316s 00:20:17.179 sys 0m0.511s 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:17.179 ************************************ 00:20:17.179 END TEST bdev_fio_rw_verify 00:20:17.179 ************************************ 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:17.179 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:17.180 16:35:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fbe08de6-1ce0-4113-9f63-b2a9f9b8792a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fbe08de6-1ce0-4113-9f63-b2a9f9b8792a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fbe08de6-1ce0-4113-9f63-b2a9f9b8792a",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "225e4cab-30fd-4557-a78c-9267caf043a7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7d4dbc90-88af-4a59-9348-529a1bc4d1eb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "2b81db53-7a63-4973-85d1-aa9ba59ca02c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:17.180 16:35:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:17.180 16:35:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:17.180 16:35:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:17.180 16:35:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:17.180 /home/vagrant/spdk_repo/spdk 00:20:17.180 16:35:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:17.180 16:35:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:20:17.180 00:20:17.180 real 0m11.518s 00:20:17.180 user 0m11.448s 00:20:17.180 sys 0m0.644s 00:20:17.180 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:17.180 16:35:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:17.180 ************************************ 00:20:17.180 END TEST bdev_fio 00:20:17.180 ************************************ 00:20:17.180 16:35:57 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:17.180 16:35:57 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:17.180 16:35:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:17.180 16:35:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:17.180 16:35:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:17.180 ************************************ 00:20:17.180 START TEST bdev_verify 00:20:17.180 ************************************ 00:20:17.180 16:35:57 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:17.180 [2024-12-06 16:35:57.938332] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 
00:20:17.180 [2024-12-06 16:35:57.938460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101177 ] 00:20:17.180 [2024-12-06 16:35:58.109607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:17.180 [2024-12-06 16:35:58.138051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.180 [2024-12-06 16:35:58.138173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.180 Running I/O for 5 seconds... 00:20:18.690 15469.00 IOPS, 60.43 MiB/s [2024-12-06T16:36:01.468Z] 15697.00 IOPS, 61.32 MiB/s [2024-12-06T16:36:02.419Z] 15665.33 IOPS, 61.19 MiB/s [2024-12-06T16:36:03.356Z] 15847.25 IOPS, 61.90 MiB/s [2024-12-06T16:36:03.356Z] 15795.00 IOPS, 61.70 MiB/s 00:20:21.517 Latency(us) 00:20:21.517 [2024-12-06T16:36:03.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.517 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:21.517 Verification LBA range: start 0x0 length 0x2000 00:20:21.517 raid5f : 5.01 7852.94 30.68 0.00 0.00 24478.24 222.69 21864.41 00:20:21.517 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:21.517 Verification LBA range: start 0x2000 length 0x2000 00:20:21.517 raid5f : 5.01 7936.87 31.00 0.00 0.00 24199.71 187.81 21864.41 00:20:21.517 [2024-12-06T16:36:03.356Z] =================================================================================================================== 00:20:21.517 [2024-12-06T16:36:03.356Z] Total : 15789.81 61.68 0.00 0.00 24338.18 187.81 21864.41 00:20:21.776 00:20:21.776 real 0m5.718s 00:20:21.776 user 0m10.647s 00:20:21.776 sys 0m0.244s 00:20:21.776 16:36:03 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.776 16:36:03 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:21.776 ************************************ 00:20:21.776 END TEST bdev_verify 00:20:21.776 ************************************ 00:20:22.035 16:36:03 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:22.035 16:36:03 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:22.035 16:36:03 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.035 16:36:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:22.035 ************************************ 00:20:22.035 START TEST bdev_verify_big_io 00:20:22.035 ************************************ 00:20:22.035 16:36:03 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:22.035 [2024-12-06 16:36:03.709470] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:20:22.035 [2024-12-06 16:36:03.709608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101260 ] 00:20:22.293 [2024-12-06 16:36:03.880864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:22.293 [2024-12-06 16:36:03.909369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.293 [2024-12-06 16:36:03.909473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.293 Running I/O for 5 seconds... 
00:20:24.605 758.00 IOPS, 47.38 MiB/s [2024-12-06T16:36:07.409Z] 854.50 IOPS, 53.41 MiB/s [2024-12-06T16:36:08.345Z] 909.00 IOPS, 56.81 MiB/s [2024-12-06T16:36:09.285Z] 935.75 IOPS, 58.48 MiB/s [2024-12-06T16:36:09.545Z] 964.80 IOPS, 60.30 MiB/s 00:20:27.706 Latency(us) 00:20:27.706 [2024-12-06T16:36:09.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.706 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:27.706 Verification LBA range: start 0x0 length 0x200 00:20:27.706 raid5f : 5.27 481.90 30.12 0.00 0.00 6575689.72 151.14 302209.68 00:20:27.706 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:27.706 Verification LBA range: start 0x200 length 0x200 00:20:27.706 raid5f : 5.21 487.49 30.47 0.00 0.00 6473640.81 144.88 298546.53 00:20:27.706 [2024-12-06T16:36:09.545Z] =================================================================================================================== 00:20:27.706 [2024-12-06T16:36:09.545Z] Total : 969.39 60.59 0.00 0.00 6524645.17 144.88 302209.68 00:20:27.966 00:20:27.966 real 0m5.965s 00:20:27.966 user 0m11.177s 00:20:27.966 sys 0m0.214s 00:20:27.966 16:36:09 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.966 16:36:09 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.966 ************************************ 00:20:27.966 END TEST bdev_verify_big_io 00:20:27.966 ************************************ 00:20:27.966 16:36:09 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:27.966 16:36:09 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:27.966 16:36:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.966 16:36:09 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:27.966 ************************************ 00:20:27.966 START TEST bdev_write_zeroes 00:20:27.966 ************************************ 00:20:27.966 16:36:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:27.966 [2024-12-06 16:36:09.730988] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:20:27.966 [2024-12-06 16:36:09.731121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101341 ] 00:20:28.226 [2024-12-06 16:36:09.903089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.226 [2024-12-06 16:36:09.929619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.486 Running I/O for 1 seconds... 
00:20:29.424 27303.00 IOPS, 106.65 MiB/s 00:20:29.424 Latency(us) 00:20:29.424 [2024-12-06T16:36:11.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.424 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.424 raid5f : 1.01 27282.95 106.57 0.00 0.00 4677.19 1566.85 6582.22 00:20:29.424 [2024-12-06T16:36:11.263Z] =================================================================================================================== 00:20:29.424 [2024-12-06T16:36:11.263Z] Total : 27282.95 106.57 0.00 0.00 4677.19 1566.85 6582.22 00:20:29.684 00:20:29.684 real 0m1.681s 00:20:29.684 user 0m1.370s 00:20:29.684 sys 0m0.200s 00:20:29.684 16:36:11 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.684 16:36:11 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:29.684 ************************************ 00:20:29.684 END TEST bdev_write_zeroes 00:20:29.684 ************************************ 00:20:29.684 16:36:11 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:29.684 16:36:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:29.684 16:36:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.684 16:36:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:29.684 ************************************ 00:20:29.684 START TEST bdev_json_nonenclosed 00:20:29.684 ************************************ 00:20:29.684 16:36:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:29.684 [2024-12-06 
16:36:11.479705] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:20:29.684 [2024-12-06 16:36:11.479831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101379 ] 00:20:29.944 [2024-12-06 16:36:11.650300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.944 [2024-12-06 16:36:11.676448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.944 [2024-12-06 16:36:11.676552] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:29.944 [2024-12-06 16:36:11.676574] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:29.944 [2024-12-06 16:36:11.676586] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:29.944 00:20:29.944 real 0m0.365s 00:20:29.944 user 0m0.146s 00:20:29.944 sys 0m0.116s 00:20:29.944 16:36:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.944 16:36:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:29.944 ************************************ 00:20:29.944 END TEST bdev_json_nonenclosed 00:20:29.944 ************************************ 00:20:30.203 16:36:11 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.203 16:36:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:30.204 16:36:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.204 16:36:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:30.204 
************************************ 00:20:30.204 START TEST bdev_json_nonarray 00:20:30.204 ************************************ 00:20:30.204 16:36:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.204 [2024-12-06 16:36:11.906219] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 23.11.0 initialization... 00:20:30.204 [2024-12-06 16:36:11.906376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101409 ] 00:20:30.463 [2024-12-06 16:36:12.075675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.463 [2024-12-06 16:36:12.101221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.463 [2024-12-06 16:36:12.101330] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:30.463 [2024-12-06 16:36:12.101353] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:20:30.463 [2024-12-06 16:36:12.101372] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:30.463
00:20:30.463 real 0m0.348s
00:20:30.463 user 0m0.142s
00:20:30.463 sys 0m0.103s
00:20:30.463 16:36:12 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:30.463 16:36:12 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:20:30.463 ************************************
00:20:30.463 END TEST bdev_json_nonarray
00:20:30.463 ************************************
00:20:30.463 16:36:12 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]]
00:20:30.463 16:36:12 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]]
00:20:30.463 16:36:12 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]]
00:20:30.463 16:36:12 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:20:30.463 16:36:12 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup
00:20:30.463 16:36:12 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:20:30.463 16:36:12 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:20:30.463 16:36:12 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:20:30.463 16:36:12 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:20:30.463 16:36:12 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:20:30.463 16:36:12 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:20:30.463
00:20:30.463 real 0m34.497s
00:20:30.463 user 0m47.298s
00:20:30.463 sys 0m4.255s
00:20:30.463 16:36:12 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:30.463 16:36:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:30.463 ************************************
00:20:30.463 END TEST blockdev_raid5f
************************************
00:20:30.463 16:36:12 -- spdk/autotest.sh@194 -- # uname -s
00:20:30.463 16:36:12 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:20:30.463 16:36:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:30.463 16:36:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:30.463 16:36:12 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:20:30.463 16:36:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:20:30.463 16:36:12 -- spdk/autotest.sh@260 -- # timing_exit lib
00:20:30.463 16:36:12 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:30.463 16:36:12 -- common/autotest_common.sh@10 -- # set +x
00:20:30.724 16:36:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:20:30.724 16:36:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:20:30.724 16:36:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:20:30.724 16:36:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:20:30.724 16:36:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:20:30.724 16:36:12 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:20:30.724 16:36:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:20:30.724 16:36:12 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:30.724 16:36:12 -- common/autotest_common.sh@10 -- # set +x
00:20:30.724 16:36:12 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:20:30.724 16:36:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:20:30.724 16:36:12 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:20:30.724 16:36:12 -- common/autotest_common.sh@10 -- # set +x
00:20:32.660 INFO: APP EXITING
00:20:32.660 INFO: killing all VMs
00:20:32.660 INFO: killing vhost app
00:20:32.660 INFO: EXIT DONE
00:20:32.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:33.178 Waiting for block devices as requested
00:20:33.178 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:20:33.178 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:20:34.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:34.123 Cleaning
00:20:34.123 Removing: /var/run/dpdk/spdk0/config
00:20:34.123 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:20:34.123 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:20:34.123 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:20:34.123 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:20:34.123 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:20:34.123 Removing: /var/run/dpdk/spdk0/hugepage_info
00:20:34.123 Removing: /dev/shm/spdk_tgt_trace.pid69462
00:20:34.123 Removing: /var/run/dpdk/spdk0
00:20:34.123 Removing: /var/run/dpdk/spdk_pid100464
00:20:34.123 Removing: /var/run/dpdk/spdk_pid100722
00:20:34.123 Removing: /var/run/dpdk/spdk_pid100756
00:20:34.123 Removing: /var/run/dpdk/spdk_pid100787
00:20:34.123 Removing: /var/run/dpdk/spdk_pid101014
00:20:34.123 Removing: /var/run/dpdk/spdk_pid101177
00:20:34.123 Removing: /var/run/dpdk/spdk_pid101260
00:20:34.123 Removing: /var/run/dpdk/spdk_pid101341
00:20:34.123 Removing: /var/run/dpdk/spdk_pid101379
00:20:34.123 Removing: /var/run/dpdk/spdk_pid101409
00:20:34.123 Removing: /var/run/dpdk/spdk_pid69298
00:20:34.123 Removing: /var/run/dpdk/spdk_pid69462
00:20:34.123 Removing: /var/run/dpdk/spdk_pid69669
00:20:34.123 Removing: /var/run/dpdk/spdk_pid69751
00:20:34.123 Removing: /var/run/dpdk/spdk_pid69785
00:20:34.123 Removing: /var/run/dpdk/spdk_pid69891
00:20:34.123 Removing: /var/run/dpdk/spdk_pid69909
00:20:34.123 Removing: /var/run/dpdk/spdk_pid70097
00:20:34.123 Removing: /var/run/dpdk/spdk_pid70176
00:20:34.123 Removing: /var/run/dpdk/spdk_pid70250
00:20:34.123 Removing: /var/run/dpdk/spdk_pid70350
00:20:34.123 Removing: /var/run/dpdk/spdk_pid70436
00:20:34.123 Removing: /var/run/dpdk/spdk_pid70470
00:20:34.123 Removing: /var/run/dpdk/spdk_pid70507
00:20:34.123 Removing: /var/run/dpdk/spdk_pid70577
00:20:34.123 Removing: /var/run/dpdk/spdk_pid70696
00:20:34.123 Removing: /var/run/dpdk/spdk_pid71131
00:20:34.123 Removing: /var/run/dpdk/spdk_pid71185
00:20:34.123 Removing: /var/run/dpdk/spdk_pid71231
00:20:34.123 Removing: /var/run/dpdk/spdk_pid71247
00:20:34.123 Removing: /var/run/dpdk/spdk_pid71311
00:20:34.123 Removing: /var/run/dpdk/spdk_pid71327
00:20:34.123 Removing: /var/run/dpdk/spdk_pid71385
00:20:34.123 Removing: /var/run/dpdk/spdk_pid71401
00:20:34.123 Removing: /var/run/dpdk/spdk_pid71454
00:20:34.123 Removing: /var/run/dpdk/spdk_pid71472
00:20:34.123 Removing: /var/run/dpdk/spdk_pid71514
00:20:34.123 Removing: /var/run/dpdk/spdk_pid71532
00:20:34.383 Removing: /var/run/dpdk/spdk_pid71665
00:20:34.383 Removing: /var/run/dpdk/spdk_pid71701
00:20:34.383 Removing: /var/run/dpdk/spdk_pid71779
00:20:34.383 Removing: /var/run/dpdk/spdk_pid72956
00:20:34.383 Removing: /var/run/dpdk/spdk_pid73151
00:20:34.383 Removing: /var/run/dpdk/spdk_pid73280
00:20:34.383 Removing: /var/run/dpdk/spdk_pid73890
00:20:34.383 Removing: /var/run/dpdk/spdk_pid74091
00:20:34.383 Removing: /var/run/dpdk/spdk_pid74220
00:20:34.383 Removing: /var/run/dpdk/spdk_pid74830
00:20:34.383 Removing: /var/run/dpdk/spdk_pid75144
00:20:34.383 Removing: /var/run/dpdk/spdk_pid75278
00:20:34.383 Removing: /var/run/dpdk/spdk_pid76621
00:20:34.383 Removing: /var/run/dpdk/spdk_pid76863
00:20:34.383 Removing: /var/run/dpdk/spdk_pid76992
00:20:34.383 Removing: /var/run/dpdk/spdk_pid78334
00:20:34.383 Removing: /var/run/dpdk/spdk_pid78576
00:20:34.383 Removing: /var/run/dpdk/spdk_pid78705
00:20:34.383 Removing: /var/run/dpdk/spdk_pid80057
00:20:34.383 Removing: /var/run/dpdk/spdk_pid80486
00:20:34.383 Removing: /var/run/dpdk/spdk_pid80621
00:20:34.383 Removing: /var/run/dpdk/spdk_pid82051
00:20:34.383 Removing: /var/run/dpdk/spdk_pid82305
00:20:34.383 Removing: /var/run/dpdk/spdk_pid82439
00:20:34.383 Removing: /var/run/dpdk/spdk_pid83869
00:20:34.383 Removing: /var/run/dpdk/spdk_pid84123
00:20:34.383 Removing: /var/run/dpdk/spdk_pid84252
00:20:34.383 Removing: /var/run/dpdk/spdk_pid85682
00:20:34.383 Removing: /var/run/dpdk/spdk_pid86164
00:20:34.383 Removing: /var/run/dpdk/spdk_pid86293
00:20:34.383 Removing: /var/run/dpdk/spdk_pid86428
00:20:34.383 Removing: /var/run/dpdk/spdk_pid86839
00:20:34.383 Removing: /var/run/dpdk/spdk_pid87554
00:20:34.383 Removing: /var/run/dpdk/spdk_pid87945
00:20:34.383 Removing: /var/run/dpdk/spdk_pid88636
00:20:34.383 Removing: /var/run/dpdk/spdk_pid89071
00:20:34.383 Removing: /var/run/dpdk/spdk_pid89834
00:20:34.383 Removing: /var/run/dpdk/spdk_pid90233
00:20:34.383 Removing: /var/run/dpdk/spdk_pid92149
00:20:34.383 Removing: /var/run/dpdk/spdk_pid92582
00:20:34.383 Removing: /var/run/dpdk/spdk_pid93000
00:20:34.383 Removing: /var/run/dpdk/spdk_pid95038
00:20:34.383 Removing: /var/run/dpdk/spdk_pid95507
00:20:34.383 Removing: /var/run/dpdk/spdk_pid96013
00:20:34.383 Removing: /var/run/dpdk/spdk_pid97037
00:20:34.383 Removing: /var/run/dpdk/spdk_pid97349
00:20:34.383 Removing: /var/run/dpdk/spdk_pid98263
00:20:34.384 Removing: /var/run/dpdk/spdk_pid98575
00:20:34.384 Removing: /var/run/dpdk/spdk_pid99487
00:20:34.384 Removing: /var/run/dpdk/spdk_pid99799
00:20:34.384 Clean
00:20:34.644 16:36:16 -- common/autotest_common.sh@1453 -- # return 0
00:20:34.644 16:36:16 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:20:34.644 16:36:16 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:34.644 16:36:16 -- common/autotest_common.sh@10 -- # set +x
00:20:34.644 16:36:16 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:20:34.644 16:36:16 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:34.644 16:36:16 -- common/autotest_common.sh@10 -- # set +x
00:20:34.644 16:36:16 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:34.644 16:36:16 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:20:34.644 16:36:16 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:20:34.644 16:36:16 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:20:34.644 16:36:16 -- spdk/autotest.sh@398 -- # hostname
00:20:34.644 16:36:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:20:34.904 geninfo: WARNING: invalid characters removed from testname!
00:20:56.839 16:36:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:59.372 16:36:41 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:01.908 16:36:43 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:03.869 16:36:45 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:06.405 16:36:47 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:08.308 16:36:49 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:10.214 16:36:51 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:21:10.214 16:36:51 -- spdk/autorun.sh@1 -- $ timing_finish
00:21:10.214 16:36:51 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:21:10.214 16:36:51 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:21:10.214 16:36:51 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:21:10.214 16:36:51 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:10.214 + [[ -n 6159 ]]
00:21:10.214 + sudo kill 6159
00:21:10.222 [Pipeline] }
00:21:10.235 [Pipeline] // timeout
00:21:10.241 [Pipeline] }
00:21:10.256 [Pipeline] // stage
00:21:10.263 [Pipeline] }
00:21:10.278 [Pipeline] // catchError
00:21:10.289 [Pipeline] stage
00:21:10.291 [Pipeline] { (Stop VM)
00:21:10.301 [Pipeline] sh
00:21:10.575 + vagrant halt
00:21:13.107 ==> default: Halting domain...
00:21:21.314 [Pipeline] sh
00:21:21.596 + vagrant destroy -f
00:21:24.186 ==> default: Removing domain...
00:21:24.198 [Pipeline] sh
00:21:24.480 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:21:24.490 [Pipeline] }
00:21:24.503 [Pipeline] // stage
00:21:24.507 [Pipeline] }
00:21:24.522 [Pipeline] // dir
00:21:24.527 [Pipeline] }
00:21:24.541 [Pipeline] // wrap
00:21:24.546 [Pipeline] }
00:21:24.559 [Pipeline] // catchError
00:21:24.567 [Pipeline] stage
00:21:24.569 [Pipeline] { (Epilogue)
00:21:24.581 [Pipeline] sh
00:21:24.864 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:21:29.076 [Pipeline] catchError
00:21:29.079 [Pipeline] {
00:21:29.091 [Pipeline] sh
00:21:29.377 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:21:29.377 Artifacts sizes are good
00:21:29.387 [Pipeline] }
00:21:29.401 [Pipeline] // catchError
00:21:29.412 [Pipeline] archiveArtifacts
00:21:29.420 Archiving artifacts
00:21:29.534 [Pipeline] cleanWs
00:21:29.595 [WS-CLEANUP] Deleting project workspace...
00:21:29.595 [WS-CLEANUP] Deferred wipeout is used...
00:21:29.603 [WS-CLEANUP] done
00:21:29.606 [Pipeline] }
00:21:29.621 [Pipeline] // stage
00:21:29.627 [Pipeline] }
00:21:29.641 [Pipeline] // node
00:21:29.646 [Pipeline] End of Pipeline
00:21:29.688 Finished: SUCCESS